Test Report: Docker_Linux_crio_arm64 17671

199a0e3eaea8884b6f30e504f56bf5d155934cac:2023-11-28:32061

Failed tests (7/308)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 35    | TestAddons/parallel/Ingress                         | 169.13       |
| 166   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 180.36       |
| 216   | TestMultiNode/serial/PingHostFrom2Pods              | 4.36         |
| 237   | TestRunningBinaryUpgrade                            | 70.12        |
| 240   | TestMissingContainerUpgrade                         | 137.25       |
| 243   | TestPause/serial/SecondStartNoReconfiguration       | 64.93        |
| 245   | TestStoppedBinaryUpgrade/Upgrade                    | 88.4         |
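The headline failure, detailed below, is TestAddons/parallel/Ingress: the in-node curl against the ingress timed out (ssh reported exit status 28, which is curl's "operation timed out" exit code), and the follow-up ingress-dns lookup also timed out. Both failing probes can be replayed by hand; the commands below are lifted from the log itself and assume the addons-663058 profile from this run is still up (substitute your own profile name):

	# In-node HTTP probe, as run at addons_test.go:261
	out/minikube-linux-arm64 -p addons-663058 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# ingress-dns probe, as run at addons_test.go:290/296 (the ip command resolved to 192.168.49.2 in this run)
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-663058 ip)"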
TestAddons/parallel/Ingress (169.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-663058 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-663058 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-663058 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6b87a866-567d-4559-b246-7095d961bfbf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6b87a866-567d-4559-b246-7095d961bfbf] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.02997596s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-663058 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.662109713s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-663058 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.048154587s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-663058 addons disable ingress-dns --alsologtostderr -v=1: (1.314692575s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-663058 addons disable ingress --alsologtostderr -v=1: (7.903048318s)
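Both probes timed out even though the nginx pod reached Running, which points at the ingress controller or node networking rather than the backend pod. A short triage sketch, assuming the cluster were still running (these commands are illustrative and not part of the test):

	# Is the controller pod actually serving?
	kubectl --context addons-663058 get pods -n ingress-nginx -o wide
	# Are the test's Ingress objects present?
	kubectl --context addons-663058 get ingress -A
	# Retry the in-node probe verbosely with an explicit timeout
	out/minikube-linux-arm64 -p addons-663058 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"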
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-663058
helpers_test.go:235: (dbg) docker inspect addons-663058:

-- stdout --
	[
	    {
	        "Id": "dbf1091a9d962df5625dbb2b76c5bb158a85613f1d4340d505160834f49dbf81",
	        "Created": "2023-11-28T04:13:47.199821784Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1262462,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T04:13:47.563485044Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/dbf1091a9d962df5625dbb2b76c5bb158a85613f1d4340d505160834f49dbf81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dbf1091a9d962df5625dbb2b76c5bb158a85613f1d4340d505160834f49dbf81/hostname",
	        "HostsPath": "/var/lib/docker/containers/dbf1091a9d962df5625dbb2b76c5bb158a85613f1d4340d505160834f49dbf81/hosts",
	        "LogPath": "/var/lib/docker/containers/dbf1091a9d962df5625dbb2b76c5bb158a85613f1d4340d505160834f49dbf81/dbf1091a9d962df5625dbb2b76c5bb158a85613f1d4340d505160834f49dbf81-json.log",
	        "Name": "/addons-663058",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-663058:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-663058",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bef8e7796ac2817cbef5c44dfa3c60f74b891b968e6fe80b38ecb4407d93d906-init/diff:/var/lib/docker/overlay2/cc610f7b23c869d03809246385f10f80b89207e6d90717a6a4867696f2289751/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bef8e7796ac2817cbef5c44dfa3c60f74b891b968e6fe80b38ecb4407d93d906/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bef8e7796ac2817cbef5c44dfa3c60f74b891b968e6fe80b38ecb4407d93d906/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bef8e7796ac2817cbef5c44dfa3c60f74b891b968e6fe80b38ecb4407d93d906/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-663058",
	                "Source": "/var/lib/docker/volumes/addons-663058/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-663058",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-663058",
	                "name.minikube.sigs.k8s.io": "addons-663058",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4a31d48f247a18ca0356c4c48a556c24d254c5c67a757c9cd5482dfc7c1cc59a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34324"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34323"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34320"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34322"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34321"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4a31d48f247a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-663058": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dbf1091a9d96",
	                        "addons-663058"
	                    ],
	                    "NetworkID": "d04171a976d48798bfc367c6e5ab55063b42967b98daadfb3ac1cf87e55cbedd",
	                    "EndpointID": "2b9c5508d2a32e52f81f5216d981fdd2e00506a8e832b65e98bfe1a85e9d18af",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-663058 -n addons-663058
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-663058 logs -n 25: (1.648247396s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC | 28 Nov 23 04:13 UTC |
	| delete  | -p download-only-354322                                                                     | download-only-354322   | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC | 28 Nov 23 04:13 UTC |
	| delete  | -p download-only-354322                                                                     | download-only-354322   | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC | 28 Nov 23 04:13 UTC |
	| start   | --download-only -p                                                                          | download-docker-010923 | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC |                     |
	|         | download-docker-010923                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-010923                                                                   | download-docker-010923 | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC | 28 Nov 23 04:13 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-303745   | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC |                     |
	|         | binary-mirror-303745                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44979                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-303745                                                                     | binary-mirror-303745   | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC | 28 Nov 23 04:13 UTC |
	| addons  | enable dashboard -p                                                                         | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC |                     |
	|         | addons-663058                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC |                     |
	|         | addons-663058                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-663058 --wait=true                                                                | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC | 28 Nov 23 04:16 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-663058 ip                                                                            | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	| addons  | addons-663058 addons disable                                                                | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	|         | -p addons-663058                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-663058 ssh cat                                                                       | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	|         | /opt/local-path-provisioner/pvc-5d8f3d78-96c7-45ba-a454-abb0965b117c_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-663058 addons disable                                                                | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	|         | addons-663058                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	|         | -p addons-663058                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-663058 addons                                                                        | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-663058 addons                                                                        | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:17 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-663058 addons                                                                        | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:17 UTC | 28 Nov 23 04:17 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:17 UTC | 28 Nov 23 04:17 UTC |
	|         | addons-663058                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-663058 ssh curl -s                                                                   | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:17 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-663058 ip                                                                            | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:19 UTC | 28 Nov 23 04:19 UTC |
	| addons  | addons-663058 addons disable                                                                | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:19 UTC | 28 Nov 23 04:19 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-663058 addons disable                                                                | addons-663058          | jenkins | v1.32.0 | 28 Nov 23 04:19 UTC | 28 Nov 23 04:19 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:13:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:13:23.136446 1261998 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:13:23.136731 1261998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:13:23.136740 1261998 out.go:309] Setting ErrFile to fd 2...
	I1128 04:13:23.136747 1261998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:13:23.136992 1261998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:13:23.137488 1261998 out.go:303] Setting JSON to false
	I1128 04:13:23.138590 1261998 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24938,"bootTime":1701119865,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:13:23.138674 1261998 start.go:138] virtualization:  
	I1128 04:13:23.141272 1261998 out.go:177] * [addons-663058] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:13:23.143438 1261998 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:13:23.145603 1261998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:13:23.143604 1261998 notify.go:220] Checking for updates...
	I1128 04:13:23.147490 1261998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:13:23.149357 1261998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:13:23.151562 1261998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:13:23.153467 1261998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:13:23.155543 1261998 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:13:23.180476 1261998 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:13:23.180607 1261998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:13:23.265042 1261998 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-28 04:13:23.254582744 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:13:23.265154 1261998 docker.go:295] overlay module found
	I1128 04:13:23.268824 1261998 out.go:177] * Using the docker driver based on user configuration
	I1128 04:13:23.270835 1261998 start.go:298] selected driver: docker
	I1128 04:13:23.270902 1261998 start.go:902] validating driver "docker" against <nil>
	I1128 04:13:23.270927 1261998 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:13:23.271586 1261998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:13:23.339599 1261998 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-28 04:13:23.330238124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:13:23.339783 1261998 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 04:13:23.340023 1261998 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:13:23.341906 1261998 out.go:177] * Using Docker driver with root privileges
	I1128 04:13:23.343544 1261998 cni.go:84] Creating CNI manager for ""
	I1128 04:13:23.343568 1261998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:13:23.343580 1261998 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1128 04:13:23.343594 1261998 start_flags.go:323] config:
	{Name:addons-663058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-663058 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:13:23.346048 1261998 out.go:177] * Starting control plane node addons-663058 in cluster addons-663058
	I1128 04:13:23.348045 1261998 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:13:23.349779 1261998 out.go:177] * Pulling base image ...
	I1128 04:13:23.351377 1261998 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:13:23.351450 1261998 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1128 04:13:23.351465 1261998 cache.go:56] Caching tarball of preloaded images
	I1128 04:13:23.351464 1261998 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:13:23.351548 1261998 preload.go:174] Found /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1128 04:13:23.351559 1261998 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:13:23.351934 1261998 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/config.json ...
	I1128 04:13:23.351964 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/config.json: {Name:mkfa500be885c7775955c90b01badbf3e7fe75c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:23.369183 1261998 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1128 04:13:23.369332 1261998 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1128 04:13:23.369357 1261998 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1128 04:13:23.369368 1261998 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1128 04:13:23.369381 1261998 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1128 04:13:23.369386 1261998 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from local cache
	I1128 04:13:39.404106 1261998 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from cached tarball
	I1128 04:13:39.404146 1261998 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:13:39.404212 1261998 start.go:365] acquiring machines lock for addons-663058: {Name:mkce45085ca9df9348f958ef1cc655918a58a433 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:13:39.404753 1261998 start.go:369] acquired machines lock for "addons-663058" in 507.755µs
	I1128 04:13:39.404793 1261998 start.go:93] Provisioning new machine with config: &{Name:addons-663058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-663058 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:13:39.404884 1261998 start.go:125] createHost starting for "" (driver="docker")
	I1128 04:13:39.407183 1261998 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1128 04:13:39.407444 1261998 start.go:159] libmachine.API.Create for "addons-663058" (driver="docker")
	I1128 04:13:39.407479 1261998 client.go:168] LocalClient.Create starting
	I1128 04:13:39.407609 1261998 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem
	I1128 04:13:40.112097 1261998 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem
	I1128 04:13:40.797817 1261998 cli_runner.go:164] Run: docker network inspect addons-663058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1128 04:13:40.816225 1261998 cli_runner.go:211] docker network inspect addons-663058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1128 04:13:40.816322 1261998 network_create.go:281] running [docker network inspect addons-663058] to gather additional debugging logs...
	I1128 04:13:40.816347 1261998 cli_runner.go:164] Run: docker network inspect addons-663058
	W1128 04:13:40.834197 1261998 cli_runner.go:211] docker network inspect addons-663058 returned with exit code 1
	I1128 04:13:40.834242 1261998 network_create.go:284] error running [docker network inspect addons-663058]: docker network inspect addons-663058: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-663058 not found
	I1128 04:13:40.834258 1261998 network_create.go:286] output of [docker network inspect addons-663058]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-663058 not found
	
	** /stderr **
	I1128 04:13:40.834355 1261998 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:13:40.853004 1261998 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004bc670}
	I1128 04:13:40.853048 1261998 network_create.go:124] attempt to create docker network addons-663058 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1128 04:13:40.853107 1261998 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-663058 addons-663058
	I1128 04:13:40.927436 1261998 network_create.go:108] docker network addons-663058 192.168.49.0/24 created
	I1128 04:13:40.927465 1261998 kic.go:121] calculated static IP "192.168.49.2" for the "addons-663058" container
	I1128 04:13:40.927543 1261998 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1128 04:13:40.945271 1261998 cli_runner.go:164] Run: docker volume create addons-663058 --label name.minikube.sigs.k8s.io=addons-663058 --label created_by.minikube.sigs.k8s.io=true
	I1128 04:13:40.966240 1261998 oci.go:103] Successfully created a docker volume addons-663058
	I1128 04:13:40.966334 1261998 cli_runner.go:164] Run: docker run --rm --name addons-663058-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-663058 --entrypoint /usr/bin/test -v addons-663058:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1128 04:13:42.890456 1261998 cli_runner.go:217] Completed: docker run --rm --name addons-663058-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-663058 --entrypoint /usr/bin/test -v addons-663058:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (1.924082517s)
	I1128 04:13:42.890489 1261998 oci.go:107] Successfully prepared a docker volume addons-663058
	I1128 04:13:42.890520 1261998 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:13:42.890540 1261998 kic.go:194] Starting extracting preloaded images to volume ...
	I1128 04:13:42.890616 1261998 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-663058:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1128 04:13:47.114323 1261998 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-663058:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (4.223670831s)
	I1128 04:13:47.114360 1261998 kic.go:203] duration metric: took 4.223818 seconds to extract preloaded images to volume
	W1128 04:13:47.114511 1261998 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1128 04:13:47.114646 1261998 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1128 04:13:47.182116 1261998 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-663058 --name addons-663058 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-663058 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-663058 --network addons-663058 --ip 192.168.49.2 --volume addons-663058:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 04:13:47.574602 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Running}}
	I1128 04:13:47.600556 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:13:47.624585 1261998 cli_runner.go:164] Run: docker exec addons-663058 stat /var/lib/dpkg/alternatives/iptables
	I1128 04:13:47.705466 1261998 oci.go:144] the created container "addons-663058" has a running status.
	I1128 04:13:47.705494 1261998 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa...
	I1128 04:13:47.976995 1261998 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1128 04:13:48.005379 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:13:48.036825 1261998 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1128 04:13:48.036852 1261998 kic_runner.go:114] Args: [docker exec --privileged addons-663058 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1128 04:13:48.142030 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:13:48.172794 1261998 machine.go:88] provisioning docker machine ...
	I1128 04:13:48.172828 1261998 ubuntu.go:169] provisioning hostname "addons-663058"
	I1128 04:13:48.172896 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:13:48.196860 1261998 main.go:141] libmachine: Using SSH client type: native
	I1128 04:13:48.197296 1261998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34324 <nil> <nil>}
	I1128 04:13:48.197316 1261998 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-663058 && echo "addons-663058" | sudo tee /etc/hostname
	I1128 04:13:48.198044 1261998 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50024->127.0.0.1:34324: read: connection reset by peer
	I1128 04:13:51.344106 1261998 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-663058
	
	I1128 04:13:51.344191 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:13:51.362954 1261998 main.go:141] libmachine: Using SSH client type: native
	I1128 04:13:51.363371 1261998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34324 <nil> <nil>}
	I1128 04:13:51.363395 1261998 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-663058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-663058/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-663058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:13:51.494020 1261998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
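
Each "About to run SSH command" step is one remote command against the forwarded container port (here 127.0.0.1:34324). A minimal sketch with golang.org/x/crypto/ssh; user, port, key path, and the lack of a retry around Dial (the real provisioner retries, as the "connection reset by peer" line above shows) are all assumptions.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded port
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:34324", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname addons-663058 && echo "addons-663058" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v out=%s", err, out)
}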
	I1128 04:13:51.494050 1261998 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:13:51.494074 1261998 ubuntu.go:177] setting up certificates
	I1128 04:13:51.494083 1261998 provision.go:83] configureAuth start
	I1128 04:13:51.494142 1261998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-663058
	I1128 04:13:51.513310 1261998 provision.go:138] copyHostCerts
	I1128 04:13:51.513394 1261998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:13:51.513530 1261998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:13:51.513594 1261998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:13:51.513650 1261998 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.addons-663058 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-663058]
	I1128 04:13:52.029853 1261998 provision.go:172] copyRemoteCerts
	I1128 04:13:52.029927 1261998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:13:52.029971 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:13:52.051351 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:13:52.147764 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 04:13:52.176968 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:13:52.206902 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1128 04:13:52.236325 1261998 provision.go:86] duration metric: configureAuth took 742.22798ms
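
The configureAuth phase above generates a server certificate signed by the shared CA, with the SANs listed in the "generating server cert" line. A sketch of that signing step using only crypto/x509; loading the CA cert and key from PEM is elided, and newServerCert is a hypothetical helper, not minikube's code.

package provision // illustrative package name

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// newServerCert mints a CA-signed serving certificate with the SANs
// from the log (IPs and DNS names) and returns it PEM-encoded.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-663058"}},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "addons-663058"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumption: validity window
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}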
	I1128 04:13:52.236403 1261998 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:13:52.236628 1261998 config.go:182] Loaded profile config "addons-663058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:13:52.236805 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:13:52.255124 1261998 main.go:141] libmachine: Using SSH client type: native
	I1128 04:13:52.255577 1261998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34324 <nil> <nil>}
	I1128 04:13:52.255599 1261998 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:13:52.502575 1261998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:13:52.502599 1261998 machine.go:91] provisioned docker machine in 4.329782606s
	I1128 04:13:52.502609 1261998 client.go:171] LocalClient.Create took 13.095122168s
	I1128 04:13:52.502622 1261998 start.go:167] duration metric: libmachine.API.Create for "addons-663058" took 13.095179875s
	I1128 04:13:52.502635 1261998 start.go:300] post-start starting for "addons-663058" (driver="docker")
	I1128 04:13:52.502645 1261998 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:13:52.502726 1261998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:13:52.502781 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:13:52.521958 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:13:52.619791 1261998 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:13:52.624016 1261998 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:13:52.624058 1261998 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:13:52.624074 1261998 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:13:52.624087 1261998 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1128 04:13:52.624098 1261998 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:13:52.624175 1261998 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:13:52.624201 1261998 start.go:303] post-start completed in 121.560452ms
	I1128 04:13:52.624530 1261998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-663058
	I1128 04:13:52.643562 1261998 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/config.json ...
	I1128 04:13:52.643858 1261998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:13:52.643901 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:13:52.661698 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:13:52.754734 1261998 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:13:52.760688 1261998 start.go:128] duration metric: createHost completed in 13.355786329s
	I1128 04:13:52.760711 1261998 start.go:83] releasing machines lock for "addons-663058", held for 13.355938599s
	I1128 04:13:52.760784 1261998 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-663058
	I1128 04:13:52.778413 1261998 ssh_runner.go:195] Run: cat /version.json
	I1128 04:13:52.778436 1261998 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:13:52.778467 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:13:52.778498 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:13:52.814071 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:13:52.816714 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:13:52.905382 1261998 ssh_runner.go:195] Run: systemctl --version
	I1128 04:13:53.050857 1261998 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:13:53.199775 1261998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:13:53.205711 1261998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:13:53.228726 1261998 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:13:53.228837 1261998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:13:53.267112 1261998 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
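
The two find/mv commands above implement a simple rename-to-*.mk_disabled trick so CRI-O's own CNI configs don't conflict with kindnet. A sketch of the same idea in Go, assuming root and the paths from the log; disableCNIConfigs is an illustrative name.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableCNIConfigs moves matching CNI config files aside by appending
// the .mk_disabled suffix, skipping files already disabled.
func disableCNIConfigs(patterns ...string) ([]string, error) {
	var disabled []string
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already moved aside
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	d, err := disableCNIConfigs("*loopback.conf*", "*bridge*", "*podman*")
	fmt.Println(d, err)
}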
	I1128 04:13:53.267149 1261998 start.go:472] detecting cgroup driver to use...
	I1128 04:13:53.267199 1261998 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:13:53.267276 1261998 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:13:53.285908 1261998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:13:53.299874 1261998 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:13:53.299979 1261998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:13:53.316097 1261998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:13:53.333162 1261998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:13:53.435170 1261998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:13:53.542837 1261998 docker.go:219] disabling docker service ...
	I1128 04:13:53.542956 1261998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:13:53.566632 1261998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:13:53.582083 1261998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:13:53.677677 1261998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:13:53.786525 1261998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:13:53.800535 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:13:53.822173 1261998 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:13:53.822242 1261998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:13:53.834978 1261998 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:13:53.835045 1261998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:13:53.847522 1261998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:13:53.859857 1261998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:13:53.871665 1261998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:13:53.882893 1261998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:13:53.893517 1261998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:13:53.904154 1261998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:13:53.994375 1261998 ssh_runner.go:195] Run: sudo systemctl restart crio
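
The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs driver before restarting it. A minimal Go equivalent of those substitutions, for illustration only; minikube actually shells out to sed as logged, and the separate delete-then-insert of conmon_cgroup is folded into one replacement here.

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, mirroring the first sed substitution.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Set the cgroup manager and the conmon cgroup in one pass.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}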
	I1128 04:13:54.133619 1261998 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:13:54.133741 1261998 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:13:54.138912 1261998 start.go:540] Will wait 60s for crictl version
	I1128 04:13:54.139012 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:13:54.143892 1261998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:13:54.189988 1261998 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1128 04:13:54.190152 1261998 ssh_runner.go:195] Run: crio --version
	I1128 04:13:54.241905 1261998 ssh_runner.go:195] Run: crio --version
	I1128 04:13:54.298688 1261998 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1128 04:13:54.300697 1261998 cli_runner.go:164] Run: docker network inspect addons-663058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:13:54.319141 1261998 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1128 04:13:54.324225 1261998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
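
The bash one-liner above is an idempotent /etc/hosts update: drop any stale line ending in the name, then append the fresh mapping. A sketch of the same logic in Go; ensureHostsEntry is hypothetical, and since writing /etc/hosts needs root the sketch writes to a temp copy instead.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes existing lines for name and appends ip<TAB>name,
// matching the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		out = append(out, line)
	}
	return strings.TrimRight(strings.Join(out, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	updated := ensureHostsEntry(string(b), "192.168.49.1", "host.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.new", []byte(updated), 0644); err != nil {
		panic(err)
	}
}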
	I1128 04:13:54.338187 1261998 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:13:54.338260 1261998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:13:54.405681 1261998 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:13:54.405726 1261998 crio.go:415] Images already preloaded, skipping extraction
	I1128 04:13:54.405782 1261998 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:13:54.449040 1261998 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:13:54.449064 1261998 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:13:54.449138 1261998 ssh_runner.go:195] Run: crio config
	I1128 04:13:54.505741 1261998 cni.go:84] Creating CNI manager for ""
	I1128 04:13:54.505764 1261998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:13:54.505827 1261998 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:13:54.505856 1261998 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-663058 NodeName:addons-663058 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:13:54.506011 1261998 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-663058"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
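
The kubeadm config printed above is rendered from the options struct in the kubeadm.go:176 line. A minimal sketch of that rendering with text/template, covering only a fragment of the ClusterConfiguration; the template and struct here are illustrative, not minikube's actual ones.

package main

import (
	"os"
	"text/template"
)

const tpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tpl))
	err := t.Execute(os.Stdout, struct {
		Endpoint, Version, PodSubnet, ServiceSubnet string
	}{"control-plane.minikube.internal:8443", "v1.28.4", "10.244.0.0/16", "10.96.0.0/12"})
	if err != nil {
		panic(err)
	}
}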
	
	I1128 04:13:54.506084 1261998 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-663058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-663058 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:13:54.506153 1261998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:13:54.517083 1261998 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:13:54.517184 1261998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:13:54.527693 1261998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1128 04:13:54.548714 1261998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:13:54.569424 1261998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1128 04:13:54.590594 1261998 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1128 04:13:54.595116 1261998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:13:54.608729 1261998 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058 for IP: 192.168.49.2
	I1128 04:13:54.608760 1261998 certs.go:190] acquiring lock for shared ca certs: {Name:mka7cf71bac87c390cad9bb03b67c849db7103ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:54.609221 1261998 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key
	I1128 04:13:55.110454 1261998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt ...
	I1128 04:13:55.110488 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt: {Name:mke56a7ea004e4d760238309d8de71fe22492063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:55.111118 1261998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key ...
	I1128 04:13:55.111135 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key: {Name:mk6f5b9d077d43bb9fc8bae8d673e780022eb94d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:55.111611 1261998 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key
	I1128 04:13:55.553131 1261998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt ...
	I1128 04:13:55.553169 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt: {Name:mkf3c3171689d8de7e0ded6f8ac81420290b589c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:55.553361 1261998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key ...
	I1128 04:13:55.553374 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key: {Name:mk6e974d1063d8d9bcb38fce19c2f55a330ee43a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:55.553891 1261998 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.key
	I1128 04:13:55.553912 1261998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt with IP's: []
	I1128 04:13:55.750985 1261998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt ...
	I1128 04:13:55.751018 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: {Name:mkae819e345b3edbeb73a2446bef8870212ff66d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:55.751599 1261998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.key ...
	I1128 04:13:55.751616 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.key: {Name:mk96120eb9999fe42bfd36a5041b4fb54f6e5f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:55.752069 1261998 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.key.dd3b5fb2
	I1128 04:13:55.752091 1261998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 04:13:56.012459 1261998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.crt.dd3b5fb2 ...
	I1128 04:13:56.012492 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.crt.dd3b5fb2: {Name:mk8dd89179df761d99252b9cb20565cb3666bbc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:56.013129 1261998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.key.dd3b5fb2 ...
	I1128 04:13:56.013149 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.key.dd3b5fb2: {Name:mk89e46428f24f68490f8d57568a61a7011a1c0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:56.013630 1261998 certs.go:337] copying /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.crt
	I1128 04:13:56.013724 1261998 certs.go:341] copying /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.key
	I1128 04:13:56.013776 1261998 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/proxy-client.key
	I1128 04:13:56.013806 1261998 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/proxy-client.crt with IP's: []
	I1128 04:13:56.323897 1261998 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/proxy-client.crt ...
	I1128 04:13:56.323930 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/proxy-client.crt: {Name:mkf14c7de0ec9f1b7cb3b1c2a12351d592d4ca34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:56.324508 1261998 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/proxy-client.key ...
	I1128 04:13:56.324530 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/proxy-client.key: {Name:mkb27737888a8cf47b311b56be25392b708746ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:13:56.325328 1261998 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:13:56.325378 1261998 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem (1082 bytes)
	I1128 04:13:56.325410 1261998 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:13:56.325441 1261998 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem (1679 bytes)
	I1128 04:13:56.326041 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:13:56.356415 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 04:13:56.385814 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:13:56.414846 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 04:13:56.443729 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:13:56.472590 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:13:56.501635 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:13:56.530543 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1128 04:13:56.560951 1261998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:13:56.589950 1261998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:13:56.611622 1261998 ssh_runner.go:195] Run: openssl version
	I1128 04:13:56.618806 1261998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:13:56.631040 1261998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:13:56.635741 1261998 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 04:13 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:13:56.635828 1261998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:13:56.644731 1261998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:13:56.656896 1261998 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:13:56.661590 1261998 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 04:13:56.661641 1261998 kubeadm.go:404] StartCluster: {Name:addons-663058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-663058 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:13:56.661732 1261998 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:13:56.661796 1261998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:13:56.710480 1261998 cri.go:89] found id: ""
	I1128 04:13:56.710559 1261998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:13:56.722157 1261998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:13:56.733277 1261998 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1128 04:13:56.733351 1261998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:13:56.744934 1261998 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:13:56.745005 1261998 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1128 04:13:56.803984 1261998 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 04:13:56.804280 1261998 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:13:56.854110 1261998 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1128 04:13:56.854187 1261998 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1128 04:13:56.854228 1261998 kubeadm.go:322] OS: Linux
	I1128 04:13:56.854277 1261998 kubeadm.go:322] CGROUPS_CPU: enabled
	I1128 04:13:56.854327 1261998 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1128 04:13:56.854375 1261998 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1128 04:13:56.854424 1261998 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1128 04:13:56.854473 1261998 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1128 04:13:56.854526 1261998 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1128 04:13:56.854571 1261998 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1128 04:13:56.854620 1261998 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1128 04:13:56.854667 1261998 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1128 04:13:56.934701 1261998 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:13:56.934832 1261998 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:13:56.934943 1261998 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1128 04:13:57.213123 1261998 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:13:57.217820 1261998 out.go:204]   - Generating certificates and keys ...
	I1128 04:13:57.217933 1261998 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:13:57.218009 1261998 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:13:58.342954 1261998 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 04:13:58.971982 1261998 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 04:13:59.529515 1261998 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 04:13:59.994373 1261998 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 04:14:00.916018 1261998 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 04:14:00.916422 1261998 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-663058 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1128 04:14:01.332401 1261998 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 04:14:01.332812 1261998 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-663058 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1128 04:14:02.092024 1261998 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 04:14:02.815553 1261998 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 04:14:03.043446 1261998 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 04:14:03.043838 1261998 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:14:03.525118 1261998 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:14:03.821982 1261998 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:14:04.068690 1261998 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:14:04.452053 1261998 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:14:04.452691 1261998 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:14:04.455579 1261998 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:14:04.458367 1261998 out.go:204]   - Booting up control plane ...
	I1128 04:14:04.458494 1261998 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:14:04.458568 1261998 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:14:04.458630 1261998 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:14:04.471111 1261998 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:14:04.472057 1261998 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:14:04.472341 1261998 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:14:04.572929 1261998 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:14:11.576179 1261998 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003357 seconds
	I1128 04:14:11.576508 1261998 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:14:11.590967 1261998 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:14:12.117954 1261998 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:14:12.118177 1261998 kubeadm.go:322] [mark-control-plane] Marking the node addons-663058 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:14:12.629017 1261998 kubeadm.go:322] [bootstrap-token] Using token: 2bv9fd.ahk6e8w1pp2ra533
	I1128 04:14:12.630834 1261998 out.go:204]   - Configuring RBAC rules ...
	I1128 04:14:12.630955 1261998 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:14:12.637717 1261998 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:14:12.647642 1261998 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:14:12.651766 1261998 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:14:12.655777 1261998 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:14:12.660030 1261998 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:14:12.677083 1261998 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:14:12.944804 1261998 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:14:13.091092 1261998 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:14:13.092314 1261998 kubeadm.go:322] 
	I1128 04:14:13.092385 1261998 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:14:13.092391 1261998 kubeadm.go:322] 
	I1128 04:14:13.092466 1261998 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:14:13.092472 1261998 kubeadm.go:322] 
	I1128 04:14:13.092496 1261998 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:14:13.092551 1261998 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:14:13.092602 1261998 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:14:13.092607 1261998 kubeadm.go:322] 
	I1128 04:14:13.092681 1261998 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:14:13.092689 1261998 kubeadm.go:322] 
	I1128 04:14:13.092733 1261998 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:14:13.092738 1261998 kubeadm.go:322] 
	I1128 04:14:13.092786 1261998 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:14:13.092856 1261998 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:14:13.092920 1261998 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:14:13.092925 1261998 kubeadm.go:322] 
	I1128 04:14:13.093003 1261998 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:14:13.093075 1261998 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:14:13.093080 1261998 kubeadm.go:322] 
	I1128 04:14:13.093158 1261998 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2bv9fd.ahk6e8w1pp2ra533 \
	I1128 04:14:13.093255 1261998 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c \
	I1128 04:14:13.093274 1261998 kubeadm.go:322] 	--control-plane 
	I1128 04:14:13.093279 1261998 kubeadm.go:322] 
	I1128 04:14:13.093358 1261998 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:14:13.093363 1261998 kubeadm.go:322] 
	I1128 04:14:13.093439 1261998 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2bv9fd.ahk6e8w1pp2ra533 \
	I1128 04:14:13.093534 1261998 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c 
	I1128 04:14:13.097899 1261998 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1128 04:14:13.098089 1261998 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:14:13.098128 1261998 cni.go:84] Creating CNI manager for ""
	I1128 04:14:13.098150 1261998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:14:13.102053 1261998 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1128 04:14:13.104012 1261998 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 04:14:13.132746 1261998 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 04:14:13.132775 1261998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 04:14:13.165540 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 04:14:14.083607 1261998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:14:14.083739 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:14.083819 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=addons-663058 minikube.k8s.io/updated_at=2023_11_28T04_14_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:14.281673 1261998 ops.go:34] apiserver oom_adj: -16
	I1128 04:14:14.281804 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:14.377335 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:14.976245 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:15.475821 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:15.976111 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:16.476348 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:16.976438 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:17.476573 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:17.976380 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:18.476499 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:18.976125 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:19.476428 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:19.975746 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:20.475736 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:20.976723 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:21.475767 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:21.975766 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:22.475745 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:22.975771 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:23.476409 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:23.976433 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:24.476554 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:24.976383 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:25.475776 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:25.975741 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:26.475749 1261998 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:14:26.576783 1261998 kubeadm.go:1081] duration metric: took 12.493083591s to wait for elevateKubeSystemPrivileges.
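
The burst of identical "kubectl get sa default" runs above is a poll loop: retry roughly every 500ms until the default service account exists, which signals that RBAC setup can proceed. A sketch of that pattern; waitForServiceAccount is a hypothetical name, and it shells out to the same kubectl binary the log uses.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForServiceAccount polls until `kubectl get sa default` succeeds
// or the deadline passes.
func waitForServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; cluster is ready for RBAC work
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	err := waitForServiceAccount(
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute)
	fmt.Println(err)
}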
	I1128 04:14:26.576813 1261998 kubeadm.go:406] StartCluster complete in 29.915176752s
	I1128 04:14:26.576832 1261998 settings.go:142] acquiring lock: {Name:mk51bec1305a61d1e5f21881e1d4b01dfafff7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:14:26.576965 1261998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:14:26.577378 1261998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/kubeconfig: {Name:mkdd24900acdf0a7a11c60f4e6d81c9963f4153d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:14:26.578139 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:14:26.578418 1261998 config.go:182] Loaded profile config "addons-663058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:14:26.578525 1261998 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1128 04:14:26.578606 1261998 addons.go:69] Setting volumesnapshots=true in profile "addons-663058"
	I1128 04:14:26.578622 1261998 addons.go:231] Setting addon volumesnapshots=true in "addons-663058"
	I1128 04:14:26.578680 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.579145 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.579647 1261998 addons.go:69] Setting cloud-spanner=true in profile "addons-663058"
	I1128 04:14:26.579672 1261998 addons.go:231] Setting addon cloud-spanner=true in "addons-663058"
	I1128 04:14:26.579719 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.580134 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.581277 1261998 addons.go:69] Setting metrics-server=true in profile "addons-663058"
	I1128 04:14:26.581310 1261998 addons.go:231] Setting addon metrics-server=true in "addons-663058"
	I1128 04:14:26.581346 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.581759 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.582135 1261998 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-663058"
	I1128 04:14:26.582159 1261998 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-663058"
	I1128 04:14:26.582204 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.582603 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.586650 1261998 addons.go:69] Setting registry=true in profile "addons-663058"
	I1128 04:14:26.586810 1261998 addons.go:231] Setting addon registry=true in "addons-663058"
	I1128 04:14:26.586918 1261998 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-663058"
	I1128 04:14:26.586964 1261998 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-663058"
	I1128 04:14:26.586981 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.587002 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.587430 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.588947 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.593690 1261998 addons.go:69] Setting storage-provisioner=true in profile "addons-663058"
	I1128 04:14:26.593731 1261998 addons.go:231] Setting addon storage-provisioner=true in "addons-663058"
	I1128 04:14:26.593783 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.594226 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.594366 1261998 addons.go:69] Setting default-storageclass=true in profile "addons-663058"
	I1128 04:14:26.594384 1261998 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-663058"
	I1128 04:14:26.594622 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.612640 1261998 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-663058"
	I1128 04:14:26.612742 1261998 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-663058"
	I1128 04:14:26.613132 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.614603 1261998 addons.go:69] Setting gcp-auth=true in profile "addons-663058"
	I1128 04:14:26.614672 1261998 mustload.go:65] Loading cluster: addons-663058
	I1128 04:14:26.614895 1261998 config.go:182] Loaded profile config "addons-663058": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:14:26.615187 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.628750 1261998 addons.go:69] Setting ingress=true in profile "addons-663058"
	I1128 04:14:26.628839 1261998 addons.go:231] Setting addon ingress=true in "addons-663058"
	I1128 04:14:26.628928 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.629491 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.640967 1261998 addons.go:69] Setting ingress-dns=true in profile "addons-663058"
	I1128 04:14:26.641042 1261998 addons.go:231] Setting addon ingress-dns=true in "addons-663058"
	I1128 04:14:26.641131 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.641615 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.656836 1261998 addons.go:69] Setting inspektor-gadget=true in profile "addons-663058"
	I1128 04:14:26.656915 1261998 addons.go:231] Setting addon inspektor-gadget=true in "addons-663058"
	I1128 04:14:26.656993 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.657479 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.712389 1261998 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1128 04:14:26.721638 1261998 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1128 04:14:26.721864 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1128 04:14:26.722049 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
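The cli_runner lines above resolve the host port that Docker published for the container's sshd; every manifest below is then copied in over that SSH connection. A minimal standalone sketch of the same lookup, assuming only the docker CLI (the container name comes from this run; the program itself is illustrative):

// Sketch: reproduce the HostPort lookup logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template cli_runner logs: the first published binding for 22/tcp.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-663058").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 34324 in this run
}

The sshutil lines further down show the result in use: each new SSH client dials 127.0.0.1:34324.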
	I1128 04:14:26.772490 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1128 04:14:26.774708 1261998 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1128 04:14:26.774734 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1128 04:14:26.774803 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:26.807561 1261998 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1128 04:14:26.809441 1261998 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1128 04:14:26.809464 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1128 04:14:26.809551 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:26.813412 1261998 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1128 04:14:26.814968 1261998 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:14:26.814996 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:14:26.815075 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:26.824926 1261998 addons.go:231] Setting addon default-storageclass=true in "addons-663058"
	I1128 04:14:26.824971 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.825422 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.826661 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:14:26.827796 1261998 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-663058"
	I1128 04:14:26.827812 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1128 04:14:26.857494 1261998 out.go:177]   - Using image docker.io/registry:2.8.3
	I1128 04:14:26.859531 1261998 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1128 04:14:26.861618 1261998 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1128 04:14:26.861640 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1128 04:14:26.861716 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:26.870223 1261998 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:14:26.870930 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.872105 1261998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1128 04:14:26.874591 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:26.876080 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1128 04:14:26.900394 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1128 04:14:26.902201 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1128 04:14:26.903989 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1128 04:14:26.905582 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1128 04:14:26.906324 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:26.906395 1261998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1128 04:14:26.906496 1261998 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:14:26.908464 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1128 04:14:26.910892 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:14:26.911023 1261998 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1128 04:14:26.912046 1261998 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-663058" context rescaled to 1 replicas
	I1128 04:14:26.912411 1261998 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1128 04:14:26.914547 1261998 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1128 04:14:26.914615 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:26.916291 1261998 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1128 04:14:26.916316 1261998 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:14:26.918284 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1128 04:14:26.918295 1261998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1128 04:14:26.920956 1261998 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1128 04:14:26.922076 1261998 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1128 04:14:26.922095 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1128 04:14:26.922150 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:26.940872 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1128 04:14:26.940960 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:26.959754 1261998 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1128 04:14:26.959782 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1128 04:14:26.959849 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:26.989340 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:26.921235 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:27.001541 1261998 out.go:177] * Verifying Kubernetes components...
	I1128 04:14:27.003912 1261998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:14:27.017537 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.032208 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.035759 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.092261 1261998 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1128 04:14:27.094309 1261998 out.go:177]   - Using image docker.io/busybox:stable
	I1128 04:14:27.101024 1261998 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:14:27.103621 1261998 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1128 04:14:27.103661 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1128 04:14:27.103762 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:27.106716 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:14:27.106805 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:27.119695 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.165057 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.206126 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.206546 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.228776 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.238134 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.239218 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.259834 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:27.508350 1261998 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:14:27.508375 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1128 04:14:27.552916 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1128 04:14:27.558345 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1128 04:14:27.609294 1261998 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1128 04:14:27.609323 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1128 04:14:27.625735 1261998 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1128 04:14:27.625758 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1128 04:14:27.664653 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1128 04:14:27.689504 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1128 04:14:27.707930 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:14:27.719006 1261998 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1128 04:14:27.719081 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1128 04:14:27.727475 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1128 04:14:27.737451 1261998 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:14:27.737529 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:14:27.758961 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:14:27.783620 1261998 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1128 04:14:27.783693 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1128 04:14:27.857256 1261998 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1128 04:14:27.857329 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1128 04:14:27.861932 1261998 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1128 04:14:27.862004 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1128 04:14:27.866897 1261998 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1128 04:14:27.866980 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1128 04:14:27.932675 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1128 04:14:27.963193 1261998 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:14:27.963270 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:14:28.014824 1261998 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1128 04:14:28.014899 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1128 04:14:28.071211 1261998 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1128 04:14:28.071285 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1128 04:14:28.104108 1261998 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1128 04:14:28.104185 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1128 04:14:28.172264 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:14:28.230833 1261998 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1128 04:14:28.230896 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1128 04:14:28.250676 1261998 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1128 04:14:28.250752 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1128 04:14:28.304552 1261998 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1128 04:14:28.304616 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1128 04:14:28.381515 1261998 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1128 04:14:28.381590 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1128 04:14:28.475358 1261998 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1128 04:14:28.475432 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1128 04:14:28.562794 1261998 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1128 04:14:28.562868 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1128 04:14:28.570978 1261998 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1128 04:14:28.571041 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1128 04:14:28.625257 1261998 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1128 04:14:28.625334 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1128 04:14:28.632720 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1128 04:14:28.640909 1261998 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1128 04:14:28.640934 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1128 04:14:28.731757 1261998 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1128 04:14:28.731790 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1128 04:14:28.752668 1261998 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1128 04:14:28.752694 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1128 04:14:28.839910 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1128 04:14:28.857614 1261998 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1128 04:14:28.857638 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1128 04:14:28.975299 1261998 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1128 04:14:28.975341 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1128 04:14:29.232765 1261998 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1128 04:14:29.232792 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1128 04:14:29.417677 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1128 04:14:30.019804 1261998 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.164140364s)
	I1128 04:14:30.019907 1261998 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.015967532s)
	I1128 04:14:30.019967 1261998 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
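The bash pipeline whose completion is logged above rewrites the coredns ConfigMap in place: the first sed expression inserts a hosts plugin block ahead of the forward directive, the second adds a log directive after errors, and kubectl replace writes the result back. Reconstructed from the sed script, the injected Corefile stanza is roughly:

hosts {
   192.168.49.1 host.minikube.internal
   fallthrough
}

fallthrough keeps every name other than host.minikube.internal on the normal forwarding path.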
	I1128 04:14:30.020943 1261998 node_ready.go:35] waiting up to 6m0s for node "addons-663058" to be "Ready" ...
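node_ready.go now polls the node until its Ready condition turns True; the repeated "Ready":"False" lines below are those polls. A condensed client-go sketch of the same check, where the kubeconfig path and poll cadence are assumptions rather than minikube's actual values:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-663058", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// A node is usable once the Ready condition reports True.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second) // illustrative cadence, close to the log's
	}
}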
	I1128 04:14:31.121265 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.568311192s)
	I1128 04:14:31.121348 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.562979971s)
	I1128 04:14:31.616176 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.951457985s)
	I1128 04:14:32.140123 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:32.796104 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.088083887s)
	I1128 04:14:32.796137 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.106550043s)
	I1128 04:14:32.796181 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.068635788s)
	I1128 04:14:32.796409 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.03737103s)
	I1128 04:14:32.796614 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.863864708s)
	I1128 04:14:32.796645 1261998 addons.go:467] Verifying addon ingress=true in "addons-663058"
	I1128 04:14:32.796694 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.624402862s)
	I1128 04:14:32.796774 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.164024457s)
	I1128 04:14:32.796821 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.956881721s)
	I1128 04:14:32.799240 1261998 out.go:177] * Verifying ingress addon...
	W1128 04:14:32.801278 1261998 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1128 04:14:32.801309 1261998 retry.go:31] will retry after 328.925108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
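This failure is a CRD ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass, but the CRDs introducing that kind were created in the same apply batch and the API server is not serving them yet, hence "ensure CRDs are installed first". As the --force re-apply below shows, retrying once the CRDs have registered is enough. A minimal sketch of that retry pattern, with illustrative file names and backoff rather than minikube's retry.go internals:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs kubectl apply until CRD-backed kinds resolve.
func applyWithRetry(attempts int, files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(300 * time.Millisecond) // newly created CRDs register quickly
	}
	return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
}

func main() {
	// Hypothetical manifest paths mirroring the batch retried above.
	if err := applyWithRetry(5,
		"snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"csi-hostpath-snapshotclass.yaml",
	); err != nil {
		panic(err)
	}
}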
	I1128 04:14:32.802099 1261998 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1128 04:14:32.802279 1261998 addons.go:467] Verifying addon metrics-server=true in "addons-663058"
	I1128 04:14:32.802303 1261998 addons.go:467] Verifying addon registry=true in "addons-663058"
	I1128 04:14:32.805165 1261998 out.go:177] * Verifying registry addon...
	I1128 04:14:32.808722 1261998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1128 04:14:32.812649 1261998 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1128 04:14:32.812802 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:32.819428 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1128 04:14:32.820555 1261998 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
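The warning above is an optimistic-concurrency conflict: the storage-provisioner-rancher callback read the local-path StorageClass, another writer updated it first, and the stale Update was rejected with "the object has been modified". client-go ships a helper for exactly this re-read-and-retry loop; a sketch of marking the class default with it (the annotation key is the standard default-class marker; the client wiring is illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	// On a 409 Conflict, RetryOnConflict re-runs the closure, which re-reads
	// the latest object before mutating it again.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := client.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}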
	I1128 04:14:32.822091 1261998 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1128 04:14:32.822151 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:32.825926 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:33.130463 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1128 04:14:33.137281 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.719520771s)
	I1128 04:14:33.139671 1261998 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-663058"
	I1128 04:14:33.142170 1261998 out.go:177] * Verifying csi-hostpath-driver addon...
	I1128 04:14:33.145426 1261998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1128 04:14:33.164242 1261998 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1128 04:14:33.164322 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:33.179641 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
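Each kapi.go:96 line that dominates the rest of this log is one poll of a label selector, repeated until every matched pod leaves Pending. In client-go terms the loop is roughly the following, with the namespace and selector taken from the log and everything else an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			// Count pods that have progressed past Pending into Running.
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if running == len(pods.Items) {
				fmt.Println("all pods Running")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}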
	I1128 04:14:33.341436 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:33.342786 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:33.691373 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:33.884268 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:33.884589 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:34.209814 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:34.325094 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:34.330288 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:34.577903 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:34.688958 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:34.835347 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:34.845859 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:34.980994 1261998 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.85047888s)
	I1128 04:14:35.187713 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:35.327304 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:35.331598 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:35.379914 1261998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1128 04:14:35.380060 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:35.434299 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:35.584113 1261998 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1128 04:14:35.611334 1261998 addons.go:231] Setting addon gcp-auth=true in "addons-663058"
	I1128 04:14:35.611397 1261998 host.go:66] Checking if "addons-663058" exists ...
	I1128 04:14:35.611876 1261998 cli_runner.go:164] Run: docker container inspect addons-663058 --format={{.State.Status}}
	I1128 04:14:35.634685 1261998 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1128 04:14:35.634741 1261998 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663058
	I1128 04:14:35.661628 1261998 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34324 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/addons-663058/id_rsa Username:docker}
	I1128 04:14:35.684058 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:35.767749 1261998 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1128 04:14:35.769818 1261998 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1128 04:14:35.771895 1261998 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1128 04:14:35.771955 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1128 04:14:35.824723 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:35.831069 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:35.842859 1261998 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1128 04:14:35.842931 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1128 04:14:35.910742 1261998 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1128 04:14:35.910768 1261998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1128 04:14:35.963353 1261998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1128 04:14:36.184744 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:36.325031 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:36.329869 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:36.701347 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:36.805885 1261998 addons.go:467] Verifying addon gcp-auth=true in "addons-663058"
	I1128 04:14:36.808362 1261998 out.go:177] * Verifying gcp-auth addon...
	I1128 04:14:36.810905 1261998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1128 04:14:36.841304 1261998 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1128 04:14:36.841332 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:36.878256 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:36.879041 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:36.895644 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:37.077486 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:37.187487 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:37.325580 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:37.332432 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:37.400592 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:37.684989 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:37.824130 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:37.830718 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:37.899906 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:38.184133 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:38.330859 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:38.332207 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:38.399179 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:38.686130 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:38.824556 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:38.832219 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:38.901224 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:39.077646 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:39.186879 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:39.324963 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:39.330420 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:39.400264 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:39.687686 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:39.823929 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:39.830303 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:39.900322 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:40.185801 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:40.324804 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:40.331262 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:40.399355 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:40.685769 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:40.824188 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:40.830283 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:40.899378 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:41.184173 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:41.324040 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:41.329923 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:41.400434 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:41.576651 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:41.684325 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:41.823874 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:41.829767 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:41.899962 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:42.185305 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:42.325464 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:42.331300 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:42.399920 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:42.685867 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:42.824582 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:42.830727 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:42.899494 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:43.184153 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:43.325123 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:43.330386 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:43.399614 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:43.576744 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:43.684825 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:43.824040 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:43.830126 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:43.899450 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:44.188308 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:44.324579 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:44.330684 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:44.399397 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:44.685329 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:44.823584 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:44.830656 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:44.899466 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:45.185388 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:45.325414 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:45.331874 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:45.399704 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:45.684182 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:45.824563 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:45.830490 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:45.900516 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:46.076252 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:46.184155 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:46.324166 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:46.330202 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:46.400036 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:46.684290 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:46.824106 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:46.830223 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:46.899388 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:47.184974 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:47.324048 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:47.330317 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:47.399829 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:47.687261 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:47.824185 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:47.830458 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:47.900390 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:48.077248 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:48.184481 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:48.323755 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:48.331007 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:48.399451 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:48.684256 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:48.824700 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:48.830919 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:48.900070 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:49.184609 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:49.324433 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:49.330748 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:49.399408 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:49.685428 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:49.823711 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:49.831275 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:49.899747 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:50.185307 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:50.323844 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:50.330234 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:50.399589 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:50.576272 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:50.685057 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:50.823598 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:50.830702 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:50.899920 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:51.184031 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:51.324527 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:51.331887 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:51.399235 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:51.684834 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:51.824607 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:51.830924 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:51.899579 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:52.184319 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:52.325435 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:52.332387 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:52.400950 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:52.577184 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:52.685150 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:52.823825 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:52.830880 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:52.899479 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:53.184177 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:53.323585 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:53.330847 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:53.399483 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:53.684506 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:53.823759 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:53.830778 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:53.900153 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:54.184317 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:54.323535 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:54.330631 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:54.400139 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:54.684715 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:54.823794 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:54.829658 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:54.902338 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:55.078418 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:55.184739 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:55.324517 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:55.330601 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:55.399817 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:55.684972 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:55.823870 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:55.830844 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:55.900096 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:56.186118 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:56.323891 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:56.330466 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:56.399855 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:56.685107 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:56.823904 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:56.830031 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:56.899556 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:57.184914 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:57.325898 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:57.330879 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:57.400159 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:57.576943 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:57.685802 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:57.824102 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:57.830061 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:57.899238 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:58.185014 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:58.324447 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:58.330501 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:58.399811 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:58.685084 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:58.824566 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:58.830725 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:58.899899 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:59.184258 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:59.324003 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:59.330065 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:59.399795 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:14:59.577021 1261998 node_ready.go:58] node "addons-663058" has status "Ready":"False"
	I1128 04:14:59.685673 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:14:59.841356 1261998 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1128 04:14:59.841384 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:14:59.842791 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:14:59.936077 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:00.164709 1261998 node_ready.go:49] node "addons-663058" has status "Ready":"True"
	I1128 04:15:00.164745 1261998 node_ready.go:38] duration metric: took 30.143728708s waiting for node "addons-663058" to be "Ready" ...
	I1128 04:15:00.164758 1261998 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
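The node_ready.go messages above are a poll of the node's Ready condition, which flipped to True after roughly 30 seconds. A minimal client-go sketch of that polling pattern follows; it is not minikube's actual implementation, and the kubeconfig path is assumed from the `--kubeconfig` flag visible later in this log:

    // Sketch only: poll a node's Ready condition the way the node_ready.go
    // messages suggest. Not minikube's actual code; kubeconfig path assumed.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-663058", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    // Mirrors the `has status "Ready":"False"` / `"True"` lines above.
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("node Ready:", err == nil)
    }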
	I1128 04:15:00.325391 1261998 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1128 04:15:00.325430 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
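The kapi.go:86/kapi.go:96 pairs come from listing pods by a label selector and reporting each pod's phase until all of them leave Pending. A rough sketch of that list step, using the same client setup as the sketch above; the selector is taken from the log, but the kube-system namespace is a guess, not something the log states:

    // Sketch only: list pods for one of the label selectors seen in the log
    // and report their phases. Namespace is an assumption.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
            LabelSelector: "kubernetes.io/minikube-addons=csi-hostpath-driver",
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase) // Pending until containers start
        }
    }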
	I1128 04:15:00.349160 1261998 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fv8lf" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:00.453696 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:00.479851 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:00.497621 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:00.693891 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:00.828337 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:00.836291 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:00.906069 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:01.189030 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:01.329395 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:01.334265 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:01.400889 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:01.687795 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:01.825710 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:01.832437 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:01.901280 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:02.020995 1261998 pod_ready.go:92] pod "coredns-5dd5756b68-fv8lf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:15:02.021092 1261998 pod_ready.go:81] duration metric: took 1.671887619s waiting for pod "coredns-5dd5756b68-fv8lf" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.021157 1261998 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-663058" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.050271 1261998 pod_ready.go:92] pod "etcd-addons-663058" in "kube-system" namespace has status "Ready":"True"
	I1128 04:15:02.050351 1261998 pod_ready.go:81] duration metric: took 29.164413ms waiting for pod "etcd-addons-663058" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.050386 1261998 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-663058" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.069522 1261998 pod_ready.go:92] pod "kube-apiserver-addons-663058" in "kube-system" namespace has status "Ready":"True"
	I1128 04:15:02.069608 1261998 pod_ready.go:81] duration metric: took 19.198477ms waiting for pod "kube-apiserver-addons-663058" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.069637 1261998 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-663058" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.082349 1261998 pod_ready.go:92] pod "kube-controller-manager-addons-663058" in "kube-system" namespace has status "Ready":"True"
	I1128 04:15:02.082430 1261998 pod_ready.go:81] duration metric: took 12.757075ms waiting for pod "kube-controller-manager-addons-663058" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.082460 1261998 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lddbr" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.095376 1261998 pod_ready.go:92] pod "kube-proxy-lddbr" in "kube-system" namespace has status "Ready":"True"
	I1128 04:15:02.095457 1261998 pod_ready.go:81] duration metric: took 12.92071ms waiting for pod "kube-proxy-lddbr" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.095488 1261998 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-663058" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.186717 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:02.324714 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:02.331874 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:02.400111 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:02.477813 1261998 pod_ready.go:92] pod "kube-scheduler-addons-663058" in "kube-system" namespace has status "Ready":"True"
	I1128 04:15:02.477889 1261998 pod_ready.go:81] duration metric: took 382.380291ms waiting for pod "kube-scheduler-addons-663058" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.477917 1261998 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:02.687962 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:02.841948 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:02.843835 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:02.900451 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:03.186660 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:03.325739 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:03.332198 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:03.400487 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:03.689530 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:03.827685 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:03.837234 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:03.903774 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:04.188783 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:04.323651 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:04.331552 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:04.400401 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:04.686361 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:04.813311 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:04.825681 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:04.832064 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:04.900941 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:05.190545 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:05.332424 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:05.349431 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:05.412194 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:05.689602 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:05.824877 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:05.836002 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:05.899649 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:06.188376 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:06.326622 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:06.343194 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:06.400165 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:06.686021 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:06.825376 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:06.832103 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:06.899685 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:07.185701 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:07.284397 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:07.324818 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:07.331574 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:07.400540 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:07.685634 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:07.824230 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:07.830907 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:07.900361 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:08.185908 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:08.325517 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:08.333370 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:08.400579 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:08.686088 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:08.825058 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:08.838056 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:08.901119 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:09.187498 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:09.324322 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:09.331805 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:09.400097 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:09.685280 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:09.783158 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:09.824860 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:09.831204 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:09.900321 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:10.186953 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:10.324373 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:10.331776 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:10.403737 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:10.686106 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:10.823802 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:10.831195 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:10.900322 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:11.186131 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:11.325002 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:11.331901 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:11.399815 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:11.687190 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:11.785387 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:11.824808 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:11.833463 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:11.905349 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:12.186575 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:12.324402 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:12.331479 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:12.399775 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:12.689830 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:12.825399 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:12.831615 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:12.899413 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:13.186077 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:13.324846 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:13.331706 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:13.400280 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:13.686359 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:13.824066 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:13.831130 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:13.901253 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:14.187996 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:14.285021 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:14.325312 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:14.334570 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:14.399596 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:14.686760 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:14.825134 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:14.834475 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:14.900971 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:15.191295 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:15.326267 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:15.333535 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:15.400498 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:15.686730 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:15.825000 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:15.834133 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:15.900767 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:16.187141 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:16.286676 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:16.326735 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:16.335393 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:16.400674 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:16.691853 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:16.827143 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:16.833646 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:16.900849 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:17.186704 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:17.324680 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:17.331108 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:17.404701 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:17.687869 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:17.824161 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:17.831018 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:17.900118 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:18.185437 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:18.324876 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:18.331197 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:18.399999 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:18.685967 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:18.784769 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:18.824756 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:18.831274 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:18.900493 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:19.185944 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:19.324966 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:19.330840 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:19.400015 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:19.686228 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:19.824889 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:19.832010 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:19.899875 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:20.186751 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:20.324473 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:20.331576 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:20.400320 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:20.687441 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:20.785520 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:20.830377 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:20.836527 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:20.900099 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:21.185121 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:21.324874 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:21.331729 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:21.400446 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:21.686580 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:21.828822 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:21.832956 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:21.900150 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:22.185606 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:22.324177 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:22.333683 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:22.402954 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:22.687142 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:22.825104 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:22.831828 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:22.899936 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:23.185736 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:23.284651 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:23.327504 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:23.332348 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:23.400962 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:23.690479 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:23.825418 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:23.832512 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:23.899490 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:24.185932 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:24.325766 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:24.358804 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:24.399663 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:24.693413 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:24.824875 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:24.837040 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:24.899907 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:25.186504 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:25.325118 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:25.330872 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:25.399660 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:25.686573 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:25.788408 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:25.824288 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:25.842790 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:25.912437 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:26.186841 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:26.325153 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:26.330831 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:26.400110 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:26.686647 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:26.824999 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:26.839199 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:26.900511 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:27.194440 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:27.328205 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:27.338089 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:27.402490 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:27.705874 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:27.831341 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:27.836319 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:27.900172 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:28.187953 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:28.295200 1261998 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:15:28.325075 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:28.330895 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:28.399911 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:28.687367 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:28.861060 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:28.891071 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:28.911326 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:29.188761 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:29.328691 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:29.338963 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:29.402095 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:29.695076 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:29.825546 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:29.839335 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:29.912205 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:30.211250 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:30.286823 1261998 pod_ready.go:92] pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:15:30.286897 1261998 pod_ready.go:81] duration metric: took 27.808957182s waiting for pod "metrics-server-7c66d45ddc-xqlzx" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:30.286924 1261998 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5rx4b" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:30.299688 1261998 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-5rx4b" in "kube-system" namespace has status "Ready":"True"
	I1128 04:15:30.299716 1261998 pod_ready.go:81] duration metric: took 12.770179ms waiting for pod "nvidia-device-plugin-daemonset-5rx4b" in "kube-system" namespace to be "Ready" ...
	I1128 04:15:30.299738 1261998 pod_ready.go:38] duration metric: took 30.134965276s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:15:30.299754 1261998 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:15:30.299794 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:15:30.299859 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:15:30.353059 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:30.374650 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:30.399730 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:30.513933 1261998 cri.go:89] found id: "5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2"
	I1128 04:15:30.513964 1261998 cri.go:89] found id: ""
	I1128 04:15:30.513973 1261998 logs.go:284] 1 containers: [5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2]
	I1128 04:15:30.514027 1261998 ssh_runner.go:195] Run: which crictl
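Each cri.go stanza shells into the node (via ssh_runner.go) and asks crictl for the container IDs of one control-plane component; the matching "Gathering logs" steps below then pull each container's output with a --tail limit. A local sketch of those two crictl calls, assuming crictl is available via sudo on the current host rather than over SSH as in the report:

    // Sketch only: the two crictl invocations visible in the log, run locally.
    // In minikube these run inside the node over SSH (ssh_runner.go).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, l := range strings.Split(string(out), "\n") {
            if l = strings.TrimSpace(l); l != "" {
                ids = append(ids, l)
            }
        }
        return ids, nil
    }

    // containerLogs mirrors `sudo crictl logs --tail 400 <id>` from the
    // "Gathering logs for ..." lines further down.
    func containerLogs(id string) (string, error) {
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        ids, err := listContainerIDs("kube-apiserver")
        if err != nil || len(ids) == 0 {
            fmt.Println("no kube-apiserver container found:", err)
            return
        }
        fmt.Println("found id:", ids[0])
        logs, _ := containerLogs(ids[0])
        fmt.Println(logs)
    }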
	I1128 04:15:30.540268 1261998 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:15:30.540349 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:15:30.636584 1261998 cri.go:89] found id: "036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f"
	I1128 04:15:30.636617 1261998 cri.go:89] found id: ""
	I1128 04:15:30.636626 1261998 logs.go:284] 1 containers: [036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f]
	I1128 04:15:30.636708 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:30.653518 1261998 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:15:30.653607 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:15:30.702929 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:30.760548 1261998 cri.go:89] found id: "bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214"
	I1128 04:15:30.760581 1261998 cri.go:89] found id: ""
	I1128 04:15:30.760590 1261998 logs.go:284] 1 containers: [bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214]
	I1128 04:15:30.760663 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:30.768485 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:15:30.768570 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:15:30.826199 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:30.836844 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:30.863879 1261998 cri.go:89] found id: "8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e"
	I1128 04:15:30.863913 1261998 cri.go:89] found id: ""
	I1128 04:15:30.863922 1261998 logs.go:284] 1 containers: [8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e]
	I1128 04:15:30.863990 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:30.872798 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:15:30.872882 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:15:30.940184 1261998 cri.go:89] found id: "2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa"
	I1128 04:15:30.940219 1261998 cri.go:89] found id: ""
	I1128 04:15:30.940227 1261998 logs.go:284] 1 containers: [2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa]
	I1128 04:15:30.940290 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:30.947386 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:30.947881 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:15:30.947957 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:15:31.048472 1261998 cri.go:89] found id: "7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4"
	I1128 04:15:31.048498 1261998 cri.go:89] found id: ""
	I1128 04:15:31.048507 1261998 logs.go:284] 1 containers: [7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4]
	I1128 04:15:31.048570 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:31.065324 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:15:31.065410 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:15:31.148196 1261998 cri.go:89] found id: "3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8"
	I1128 04:15:31.148230 1261998 cri.go:89] found id: ""
	I1128 04:15:31.148239 1261998 logs.go:284] 1 containers: [3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8]
	I1128 04:15:31.148302 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:31.163472 1261998 logs.go:123] Gathering logs for dmesg ...
	I1128 04:15:31.163501 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:15:31.187693 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:31.202703 1261998 logs.go:123] Gathering logs for kube-apiserver [5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2] ...
	I1128 04:15:31.202885 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2"
	I1128 04:15:31.288546 1261998 logs.go:123] Gathering logs for kube-proxy [2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa] ...
	I1128 04:15:31.288623 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa"
	I1128 04:15:31.327878 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:31.335612 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:31.381422 1261998 logs.go:123] Gathering logs for kube-controller-manager [7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4] ...
	I1128 04:15:31.381453 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4"
	I1128 04:15:31.400407 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:31.515303 1261998 logs.go:123] Gathering logs for kubelet ...
	I1128 04:15:31.515378 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1128 04:15:31.592456 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324070    1341 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:31.595501 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324111    1341 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:31.595750 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324111    1341 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:31.595973 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324137    1341 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
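The four kubelet problems flagged above are the node authorizer rejecting configmap list/watch calls made before the pod-to-node relationship was registered ("no relationship found between node 'addons-663058' and this object"); they typically clear on their own once registration completes. A minimal sketch of probing the permission afterwards, assuming admin kubectl access with impersonation rights; note that node authorization also depends on pod bindings, so this only exercises the RBAC side and is illustrative, not part of the test run:

	# Ask the apiserver whether the node identity may list the configmap now.
	kubectl --context addons-663058 auth can-i list configmaps \
	  --namespace kube-system \
	  --as system:node:addons-663058 --as-group system:nodes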
	I1128 04:15:31.638127 1261998 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:15:31.638209 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:15:31.686772 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:31.824272 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:31.832629 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:31.899597 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:31.929572 1261998 logs.go:123] Gathering logs for etcd [036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f] ...
	I1128 04:15:31.929646 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f"
	I1128 04:15:32.024730 1261998 logs.go:123] Gathering logs for coredns [bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214] ...
	I1128 04:15:32.024809 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214"
	I1128 04:15:32.115128 1261998 logs.go:123] Gathering logs for kube-scheduler [8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e] ...
	I1128 04:15:32.115168 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e"
	I1128 04:15:32.195877 1261998 logs.go:123] Gathering logs for kindnet [3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8] ...
	I1128 04:15:32.195918 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8"
	I1128 04:15:32.204542 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:32.290939 1261998 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:15:32.290968 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:15:32.327386 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:32.330540 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:32.397765 1261998 logs.go:123] Gathering logs for container status ...
	I1128 04:15:32.397795 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:15:32.400859 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:32.484328 1261998 out.go:309] Setting ErrFile to fd 2...
	I1128 04:15:32.484457 1261998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1128 04:15:32.484525 1261998 out.go:239] X Problems detected in kubelet:
	W1128 04:15:32.484572 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324070    1341 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:32.484606 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324111    1341 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:32.484644 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324111    1341 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:32.484725 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324137    1341 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	I1128 04:15:32.484759 1261998 out.go:309] Setting ErrFile to fd 2...
	I1128 04:15:32.484781 1261998 out.go:343] TERM=,COLORTERM=, which probably does not support color
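Each log-gathering pass above repeats the same two-step pattern: resolve a container ID with crictl filtered by component name, then tail its last 400 lines. A minimal shell sketch of that pattern, assuming crictl is available on the node (e.g. via minikube ssh); the component names mirror those in the log, and the sketch is illustrative rather than the tool's actual source:

	# Resolve the newest container ID per component, then tail its logs,
	# mirroring the crictl invocations recorded above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  id=$(sudo crictl ps -a --quiet --name="$name" | head -n 1)
	  [ -n "$id" ] && sudo crictl logs --tail 400 "$id"
	done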
	I1128 04:15:32.686332 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:32.830545 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:32.833211 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:32.903840 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:33.186081 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:33.324520 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:33.332063 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:33.400167 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:33.686452 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:33.824945 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:33.830850 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:33.903403 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:34.185618 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:34.325666 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:34.331573 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:34.402947 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:34.687456 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:34.824965 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:34.832995 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:34.900130 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:35.188401 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:35.328235 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:35.335563 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:35.399533 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:35.687024 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:35.826402 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:35.831733 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:35.921106 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:36.186972 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:36.324433 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:36.332856 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:36.401267 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:36.686213 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:36.825310 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:36.833508 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 04:15:36.903338 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:37.187492 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:37.328237 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:37.334629 1261998 kapi.go:107] duration metric: took 1m4.525901843s to wait for kubernetes.io/minikube-addons=registry ...
	I1128 04:15:37.399400 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:37.691347 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:37.824828 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:37.903104 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:38.188399 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:38.325225 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:38.399939 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:38.685727 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:38.824320 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:38.902065 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:39.186504 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:39.326874 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:39.401058 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:39.686758 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:39.826136 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:39.900044 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:40.186256 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:40.326721 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:40.399989 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:40.690231 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:40.824205 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:40.904027 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:41.186116 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:41.325086 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:41.399372 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:41.686162 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:41.825312 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:41.899912 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:42.187862 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:42.324764 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:42.400633 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:42.486478 1261998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:15:42.517512 1261998 api_server.go:72] duration metric: took 1m15.596517487s to wait for apiserver process to appear ...
	I1128 04:15:42.517541 1261998 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:15:42.517574 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:15:42.517648 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:15:42.615556 1261998 cri.go:89] found id: "5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2"
	I1128 04:15:42.615583 1261998 cri.go:89] found id: ""
	I1128 04:15:42.615597 1261998 logs.go:284] 1 containers: [5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2]
	I1128 04:15:42.615674 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:42.621597 1261998 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:15:42.621669 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:15:42.689874 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:42.703644 1261998 cri.go:89] found id: "036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f"
	I1128 04:15:42.703670 1261998 cri.go:89] found id: ""
	I1128 04:15:42.703679 1261998 logs.go:284] 1 containers: [036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f]
	I1128 04:15:42.703735 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:42.716715 1261998 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:15:42.716789 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:15:42.825630 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:42.840545 1261998 cri.go:89] found id: "bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214"
	I1128 04:15:42.840618 1261998 cri.go:89] found id: ""
	I1128 04:15:42.840649 1261998 logs.go:284] 1 containers: [bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214]
	I1128 04:15:42.840765 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:42.853378 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:15:42.853526 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:15:42.901990 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:42.951942 1261998 cri.go:89] found id: "8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e"
	I1128 04:15:42.952017 1261998 cri.go:89] found id: ""
	I1128 04:15:42.952039 1261998 logs.go:284] 1 containers: [8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e]
	I1128 04:15:42.952126 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:42.957565 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:15:42.957712 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:15:43.102029 1261998 cri.go:89] found id: "2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa"
	I1128 04:15:43.102099 1261998 cri.go:89] found id: ""
	I1128 04:15:43.102132 1261998 logs.go:284] 1 containers: [2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa]
	I1128 04:15:43.102218 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:43.108465 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:15:43.108588 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:15:43.205974 1261998 cri.go:89] found id: "7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4"
	I1128 04:15:43.206048 1261998 cri.go:89] found id: ""
	I1128 04:15:43.206070 1261998 logs.go:284] 1 containers: [7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4]
	I1128 04:15:43.206159 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:43.218646 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:15:43.218764 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:15:43.236735 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:43.315648 1261998 cri.go:89] found id: "3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8"
	I1128 04:15:43.315725 1261998 cri.go:89] found id: ""
	I1128 04:15:43.315747 1261998 logs.go:284] 1 containers: [3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8]
	I1128 04:15:43.315837 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:43.336726 1261998 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:15:43.336800 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:15:43.341185 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:43.406991 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:43.615991 1261998 logs.go:123] Gathering logs for kube-controller-manager [7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4] ...
	I1128 04:15:43.616062 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4"
	I1128 04:15:43.686209 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:43.743411 1261998 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:15:43.743522 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:15:43.826137 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:43.874899 1261998 logs.go:123] Gathering logs for kindnet [3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8] ...
	I1128 04:15:43.874959 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8"
	I1128 04:15:43.915361 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:43.949757 1261998 logs.go:123] Gathering logs for kubelet ...
	I1128 04:15:43.949936 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1128 04:15:44.059780 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324070    1341 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:44.059996 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324111    1341 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:44.060182 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324111    1341 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:44.060384 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324137    1341 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	I1128 04:15:44.111077 1261998 logs.go:123] Gathering logs for dmesg ...
	I1128 04:15:44.111119 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:15:44.144702 1261998 logs.go:123] Gathering logs for kube-apiserver [5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2] ...
	I1128 04:15:44.144742 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2"
	I1128 04:15:44.186513 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:44.262098 1261998 logs.go:123] Gathering logs for etcd [036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f] ...
	I1128 04:15:44.262136 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f"
	I1128 04:15:44.325137 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:44.327025 1261998 logs.go:123] Gathering logs for coredns [bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214] ...
	I1128 04:15:44.327046 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214"
	I1128 04:15:44.399867 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:44.405709 1261998 logs.go:123] Gathering logs for kube-scheduler [8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e] ...
	I1128 04:15:44.405747 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e"
	I1128 04:15:44.472207 1261998 logs.go:123] Gathering logs for kube-proxy [2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa] ...
	I1128 04:15:44.472246 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa"
	I1128 04:15:44.521998 1261998 logs.go:123] Gathering logs for container status ...
	I1128 04:15:44.522029 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:15:44.577100 1261998 out.go:309] Setting ErrFile to fd 2...
	I1128 04:15:44.577133 1261998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1128 04:15:44.577213 1261998 out.go:239] X Problems detected in kubelet:
	W1128 04:15:44.577227 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324070    1341 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:44.577235 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324111    1341 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:44.577243 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324111    1341 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:44.577263 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324137    1341 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	I1128 04:15:44.577271 1261998 out.go:309] Setting ErrFile to fd 2...
	I1128 04:15:44.577278 1261998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:15:44.686602 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:44.823800 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:44.899566 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 04:15:45.190684 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:45.327479 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:45.401377 1261998 kapi.go:107] duration metric: took 1m8.590462101s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1128 04:15:45.406209 1261998 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-663058 cluster.
	I1128 04:15:45.408331 1261998 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1128 04:15:45.410139 1261998 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
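The gcp-auth hints above mention an opt-out label. A minimal sketch of launching a pod that skips credential mounting, assuming the `gcp-auth-skip-secret` key works as the message describes; the pod name and image here are hypothetical:

	# Create a pod carrying the opt-out label named in the gcp-auth hint.
	kubectl --context addons-663058 run no-gcp-creds \
	  --image=nginx --labels=gcp-auth-skip-secret=true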
	I1128 04:15:45.686606 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:45.824288 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:46.186156 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:46.324814 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:46.685668 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:46.824372 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:47.185486 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:47.325044 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:47.693749 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:47.830498 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:48.185860 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:48.324187 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:48.686365 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:48.824877 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:49.185811 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:49.323924 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:49.685819 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:49.824553 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:50.185341 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:50.325441 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:50.685405 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:50.824750 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:51.188378 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:51.324630 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:51.694157 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:51.824791 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:52.186392 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:52.325318 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:52.695285 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:52.827934 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:53.186402 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:53.324207 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:53.686785 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:53.823866 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:54.186881 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:54.325082 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:54.578504 1261998 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1128 04:15:54.591307 1261998 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1128 04:15:54.592606 1261998 api_server.go:141] control plane version: v1.28.4
	I1128 04:15:54.592630 1261998 api_server.go:131] duration metric: took 12.075082107s to wait for apiserver health ...
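The health wait above ends with a 200 response and an "ok" body from /healthz. A minimal sketch of issuing the same probe by hand, assuming the apiserver address from the log and the default anonymous access to health endpoints (-k skips certificate verification for brevity; illustrative only):

	# Probe the endpoint the log reports as healthy.
	curl -k https://192.168.49.2:8443/healthz
	# expected body on success: ok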
	I1128 04:15:54.592640 1261998 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:15:54.592683 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:15:54.592741 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:15:54.659213 1261998 cri.go:89] found id: "5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2"
	I1128 04:15:54.659238 1261998 cri.go:89] found id: ""
	I1128 04:15:54.659249 1261998 logs.go:284] 1 containers: [5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2]
	I1128 04:15:54.659306 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:54.666343 1261998 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:15:54.666416 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:15:54.695438 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:54.747392 1261998 cri.go:89] found id: "036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f"
	I1128 04:15:54.747419 1261998 cri.go:89] found id: ""
	I1128 04:15:54.747428 1261998 logs.go:284] 1 containers: [036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f]
	I1128 04:15:54.747485 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:54.755636 1261998 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:15:54.755710 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:15:54.824919 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:54.841402 1261998 cri.go:89] found id: "bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214"
	I1128 04:15:54.841427 1261998 cri.go:89] found id: ""
	I1128 04:15:54.841436 1261998 logs.go:284] 1 containers: [bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214]
	I1128 04:15:54.841490 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:54.849443 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:15:54.849540 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:15:54.907542 1261998 cri.go:89] found id: "8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e"
	I1128 04:15:54.907567 1261998 cri.go:89] found id: ""
	I1128 04:15:54.907575 1261998 logs.go:284] 1 containers: [8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e]
	I1128 04:15:54.907635 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:54.916302 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:15:54.916389 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:15:54.980686 1261998 cri.go:89] found id: "2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa"
	I1128 04:15:54.980709 1261998 cri.go:89] found id: ""
	I1128 04:15:54.980717 1261998 logs.go:284] 1 containers: [2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa]
	I1128 04:15:54.980770 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:54.985600 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:15:54.985679 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:15:55.053544 1261998 cri.go:89] found id: "7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4"
	I1128 04:15:55.053567 1261998 cri.go:89] found id: ""
	I1128 04:15:55.053577 1261998 logs.go:284] 1 containers: [7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4]
	I1128 04:15:55.053651 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:55.062541 1261998 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:15:55.062622 1261998 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:15:55.116211 1261998 cri.go:89] found id: "3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8"
	I1128 04:15:55.116236 1261998 cri.go:89] found id: ""
	I1128 04:15:55.116246 1261998 logs.go:284] 1 containers: [3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8]
	I1128 04:15:55.116306 1261998 ssh_runner.go:195] Run: which crictl
	I1128 04:15:55.122037 1261998 logs.go:123] Gathering logs for dmesg ...
	I1128 04:15:55.122066 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:15:55.147090 1261998 logs.go:123] Gathering logs for kube-scheduler [8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e] ...
	I1128 04:15:55.147121 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e"
	I1128 04:15:55.186862 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:55.221287 1261998 logs.go:123] Gathering logs for kube-proxy [2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa] ...
	I1128 04:15:55.221328 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa"
	I1128 04:15:55.288427 1261998 logs.go:123] Gathering logs for kube-controller-manager [7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4] ...
	I1128 04:15:55.288504 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4"
	I1128 04:15:55.325015 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:55.412045 1261998 logs.go:123] Gathering logs for kindnet [3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8] ...
	I1128 04:15:55.412092 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8"
	I1128 04:15:55.460191 1261998 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:15:55.460221 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:15:55.566154 1261998 logs.go:123] Gathering logs for kubelet ...
	I1128 04:15:55.566191 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1128 04:15:55.622704 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324070    1341 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:55.622975 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324111    1341 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:55.623185 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324111    1341 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:55.623418 1261998 logs.go:138] Found kubelet problem: Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324137    1341 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	I1128 04:15:55.670717 1261998 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:15:55.670768 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:15:55.689473 1261998 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 04:15:55.825295 1261998 logs.go:123] Gathering logs for kube-apiserver [5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2] ...
	I1128 04:15:55.825364 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2"
	I1128 04:15:55.830487 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:55.906152 1261998 logs.go:123] Gathering logs for etcd [036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f] ...
	I1128 04:15:55.906188 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f"
	I1128 04:15:55.969538 1261998 logs.go:123] Gathering logs for coredns [bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214] ...
	I1128 04:15:55.969639 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214"
	I1128 04:15:56.052957 1261998 logs.go:123] Gathering logs for container status ...
	I1128 04:15:56.052995 1261998 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:15:56.112483 1261998 out.go:309] Setting ErrFile to fd 2...
	I1128 04:15:56.112514 1261998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1128 04:15:56.112723 1261998 out.go:239] X Problems detected in kubelet:
	W1128 04:15:56.112739 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324070    1341 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:56.112757 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324111    1341 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:56.112904 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: W1128 04:14:26.324111    1341 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	W1128 04:15:56.112929 1261998 out.go:239]   Nov 28 04:14:26 addons-663058 kubelet[1341]: E1128 04:14:26.324137    1341 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-663058" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-663058' and this object
	I1128 04:15:56.112943 1261998 out.go:309] Setting ErrFile to fd 2...
	I1128 04:15:56.112956 1261998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:15:56.186606 1261998 kapi.go:107] duration metric: took 1m23.041172774s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1128 04:15:56.324151 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:56.824097 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:57.324737 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:57.824540 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:58.324742 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:58.824624 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:59.324413 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:15:59.824711 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:00.346691 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:00.825812 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:01.324870 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:01.824927 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:02.324903 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:02.824424 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:03.325197 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:03.828069 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:04.324850 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:04.826475 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:05.324630 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:05.831177 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:06.134379 1261998 system_pods.go:59] 18 kube-system pods found
	I1128 04:16:06.134420 1261998 system_pods.go:61] "coredns-5dd5756b68-fv8lf" [57a483ef-e493-4157-936d-6409856e5c07] Running
	I1128 04:16:06.134427 1261998 system_pods.go:61] "csi-hostpath-attacher-0" [5b9679c2-2586-46de-9de4-19a638f85690] Running
	I1128 04:16:06.134434 1261998 system_pods.go:61] "csi-hostpath-resizer-0" [a6abd974-6cbb-4657-bf41-339165e77029] Running
	I1128 04:16:06.134440 1261998 system_pods.go:61] "csi-hostpathplugin-hzznq" [f79fc35e-9181-4cde-9a71-3abaef3d9036] Running
	I1128 04:16:06.134445 1261998 system_pods.go:61] "etcd-addons-663058" [e7332fa6-fb01-46d2-b4e2-6cb0313464ef] Running
	I1128 04:16:06.134450 1261998 system_pods.go:61] "kindnet-rqksn" [1add52bc-1a3d-46cd-9a70-af55fea15e55] Running
	I1128 04:16:06.134456 1261998 system_pods.go:61] "kube-apiserver-addons-663058" [9f155ad3-17e0-4aa8-a8cb-503789142411] Running
	I1128 04:16:06.134462 1261998 system_pods.go:61] "kube-controller-manager-addons-663058" [89f6895e-bf51-48cd-839e-b8d43c090005] Running
	I1128 04:16:06.134471 1261998 system_pods.go:61] "kube-ingress-dns-minikube" [f49c6c00-2204-4a6d-b2f8-81f3c417d31b] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1128 04:16:06.134483 1261998 system_pods.go:61] "kube-proxy-lddbr" [c9a4dea9-5590-4107-bca6-3c1ecc1dce2b] Running
	I1128 04:16:06.134491 1261998 system_pods.go:61] "kube-scheduler-addons-663058" [ebc2bb6d-7a61-4dad-883d-2e084dc956d0] Running
	I1128 04:16:06.134501 1261998 system_pods.go:61] "metrics-server-7c66d45ddc-xqlzx" [86203f39-30e9-4edc-8374-9cc756336a40] Running
	I1128 04:16:06.134508 1261998 system_pods.go:61] "nvidia-device-plugin-daemonset-5rx4b" [c5901cae-07eb-478c-8959-5d32467d77ac] Running
	I1128 04:16:06.134513 1261998 system_pods.go:61] "registry-fxxqk" [d49b47c4-18b1-4fec-8b15-184bb5ff000a] Running
	I1128 04:16:06.134520 1261998 system_pods.go:61] "registry-proxy-79xtf" [d52cab37-0553-4eb5-b573-7796b189da95] Running
	I1128 04:16:06.134529 1261998 system_pods.go:61] "snapshot-controller-58dbcc7b99-5c7fc" [ba47a359-6753-4149-ad8f-32eec46b5155] Running
	I1128 04:16:06.134534 1261998 system_pods.go:61] "snapshot-controller-58dbcc7b99-tdnhp" [5e5d18fb-0b0e-47e9-9570-ad1f09c2ee23] Running
	I1128 04:16:06.134539 1261998 system_pods.go:61] "storage-provisioner" [2be2e7a3-1b20-46be-963a-5750afae5c36] Running
	I1128 04:16:06.134545 1261998 system_pods.go:74] duration metric: took 11.541899443s to wait for pod list to return data ...
	I1128 04:16:06.134554 1261998 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:16:06.137352 1261998 default_sa.go:45] found service account: "default"
	I1128 04:16:06.137381 1261998 default_sa.go:55] duration metric: took 2.819661ms for default service account to be created ...
	I1128 04:16:06.137392 1261998 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:16:06.149605 1261998 system_pods.go:86] 18 kube-system pods found
	I1128 04:16:06.149643 1261998 system_pods.go:89] "coredns-5dd5756b68-fv8lf" [57a483ef-e493-4157-936d-6409856e5c07] Running
	I1128 04:16:06.149652 1261998 system_pods.go:89] "csi-hostpath-attacher-0" [5b9679c2-2586-46de-9de4-19a638f85690] Running
	I1128 04:16:06.149657 1261998 system_pods.go:89] "csi-hostpath-resizer-0" [a6abd974-6cbb-4657-bf41-339165e77029] Running
	I1128 04:16:06.149662 1261998 system_pods.go:89] "csi-hostpathplugin-hzznq" [f79fc35e-9181-4cde-9a71-3abaef3d9036] Running
	I1128 04:16:06.149667 1261998 system_pods.go:89] "etcd-addons-663058" [e7332fa6-fb01-46d2-b4e2-6cb0313464ef] Running
	I1128 04:16:06.149672 1261998 system_pods.go:89] "kindnet-rqksn" [1add52bc-1a3d-46cd-9a70-af55fea15e55] Running
	I1128 04:16:06.149678 1261998 system_pods.go:89] "kube-apiserver-addons-663058" [9f155ad3-17e0-4aa8-a8cb-503789142411] Running
	I1128 04:16:06.149687 1261998 system_pods.go:89] "kube-controller-manager-addons-663058" [89f6895e-bf51-48cd-839e-b8d43c090005] Running
	I1128 04:16:06.149696 1261998 system_pods.go:89] "kube-ingress-dns-minikube" [f49c6c00-2204-4a6d-b2f8-81f3c417d31b] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1128 04:16:06.149707 1261998 system_pods.go:89] "kube-proxy-lddbr" [c9a4dea9-5590-4107-bca6-3c1ecc1dce2b] Running
	I1128 04:16:06.149715 1261998 system_pods.go:89] "kube-scheduler-addons-663058" [ebc2bb6d-7a61-4dad-883d-2e084dc956d0] Running
	I1128 04:16:06.149720 1261998 system_pods.go:89] "metrics-server-7c66d45ddc-xqlzx" [86203f39-30e9-4edc-8374-9cc756336a40] Running
	I1128 04:16:06.149730 1261998 system_pods.go:89] "nvidia-device-plugin-daemonset-5rx4b" [c5901cae-07eb-478c-8959-5d32467d77ac] Running
	I1128 04:16:06.149736 1261998 system_pods.go:89] "registry-fxxqk" [d49b47c4-18b1-4fec-8b15-184bb5ff000a] Running
	I1128 04:16:06.149742 1261998 system_pods.go:89] "registry-proxy-79xtf" [d52cab37-0553-4eb5-b573-7796b189da95] Running
	I1128 04:16:06.149747 1261998 system_pods.go:89] "snapshot-controller-58dbcc7b99-5c7fc" [ba47a359-6753-4149-ad8f-32eec46b5155] Running
	I1128 04:16:06.149756 1261998 system_pods.go:89] "snapshot-controller-58dbcc7b99-tdnhp" [5e5d18fb-0b0e-47e9-9570-ad1f09c2ee23] Running
	I1128 04:16:06.149761 1261998 system_pods.go:89] "storage-provisioner" [2be2e7a3-1b20-46be-963a-5750afae5c36] Running
	I1128 04:16:06.149768 1261998 system_pods.go:126] duration metric: took 12.370822ms to wait for k8s-apps to be running ...
	I1128 04:16:06.149789 1261998 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:16:06.149845 1261998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:16:06.166195 1261998 system_svc.go:56] duration metric: took 16.396646ms WaitForService to wait for kubelet.
	I1128 04:16:06.166224 1261998 kubeadm.go:581] duration metric: took 1m39.24523622s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:16:06.166244 1261998 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:16:06.169866 1261998 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:16:06.169903 1261998 node_conditions.go:123] node cpu capacity is 2
	I1128 04:16:06.169917 1261998 node_conditions.go:105] duration metric: took 3.666795ms to run NodePressure ...
	I1128 04:16:06.169930 1261998 start.go:228] waiting for startup goroutines ...
	I1128 04:16:06.324999 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:06.823847 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:07.324510 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:07.826455 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:08.324395 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:08.824311 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:09.324547 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:09.825212 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:10.331212 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:10.824681 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:11.324785 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:11.824851 1261998 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 04:16:12.325184 1261998 kapi.go:107] duration metric: took 1m39.523075093s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1128 04:16:12.327297 1261998 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1128 04:16:12.329378 1261998 addons.go:502] enable addons completed in 1m45.750836389s: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server inspektor-gadget default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1128 04:16:12.329461 1261998 start.go:233] waiting for cluster config update ...
	I1128 04:16:12.329505 1261998 start.go:242] writing updated cluster config ...
	I1128 04:16:12.329843 1261998 ssh_runner.go:195] Run: rm -f paused
	I1128 04:16:12.631507 1261998 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:16:12.633827 1261998 out.go:177] * Done! kubectl is now configured to use "addons-663058" cluster and "default" namespace by default
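	
The kapi.go entries above are minikube polling, at roughly 500ms intervals, for a pod matching the label app.kubernetes.io/name=ingress-nginx until it leaves Pending. A minimal client-go sketch of the same label-selector wait (this is not minikube's actual kapi.go implementation; the kubeconfig path, namespace, interval, and timeout are illustrative assumptions):

    // wait_for_label.go - a sketch of a label-selector pod wait, assuming a
    // kubeconfig at the default location. Not minikube's kapi.go code.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForLabeledPod(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        return true, nil
                    }
                }
                return false, nil // no matching pod yet, or still Pending
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = waitForLabeledPod(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 5*time.Minute)
        fmt.Println("wait result:", err)
    }

Returning (false, nil) on list errors keeps the poll alive through transient apiserver hiccups; returning a non-nil error would abort the wait immediately.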
	
	* 
	* ==> CRI-O <==
	* Nov 28 04:19:44 addons-663058 crio[876]: time="2023-11-28 04:19:44.143911086Z" level=info msg="Created container 3f6917d579c3b90530d5a3134c6e024a69ed262422851dc14020167f572978e8: default/hello-world-app-5d77478584-zgzbc/hello-world-app" id=4b715548-5f7f-4434-b821-ec2a4a3aa036 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:19:44 addons-663058 crio[876]: time="2023-11-28 04:19:44.144896065Z" level=info msg="Starting container: 3f6917d579c3b90530d5a3134c6e024a69ed262422851dc14020167f572978e8" id=f9e4bf34-b635-4390-86f7-eed8faae4b51 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:19:44 addons-663058 conmon[9092]: conmon 3f6917d579c3b90530d5 <ninfo>: container 9108 exited with status 1
	Nov 28 04:19:44 addons-663058 crio[876]: time="2023-11-28 04:19:44.170556787Z" level=info msg="Started container" PID=9108 containerID=3f6917d579c3b90530d5a3134c6e024a69ed262422851dc14020167f572978e8 description=default/hello-world-app-5d77478584-zgzbc/hello-world-app id=f9e4bf34-b635-4390-86f7-eed8faae4b51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a3d0752e6d33702e54e3d09169acc701313afe1029fcd34192574f006e591cf
	Nov 28 04:19:44 addons-663058 crio[876]: time="2023-11-28 04:19:44.384246633Z" level=info msg="Stopping pod sandbox: 112d2656a3805da0b52009272f68ff381fedbd210862c6adf18a1ce42cbfe55e" id=bc0e8caa-6487-48c1-a0d9-b688aa012e4f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 28 04:19:44 addons-663058 crio[876]: time="2023-11-28 04:19:44.385850267Z" level=info msg="Stopped pod sandbox: 112d2656a3805da0b52009272f68ff381fedbd210862c6adf18a1ce42cbfe55e" id=bc0e8caa-6487-48c1-a0d9-b688aa012e4f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 28 04:19:44 addons-663058 crio[876]: time="2023-11-28 04:19:44.979863887Z" level=info msg="Removing container: 2563d949f6bb77f1cd4f4e1416018fb09b58e6a0aa10c59b3e20a59c45ef2a86" id=7ea72e6a-4da1-4976-adb1-3309d6f963ce name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 28 04:19:45 addons-663058 crio[876]: time="2023-11-28 04:19:45.044647678Z" level=info msg="Removed container 2563d949f6bb77f1cd4f4e1416018fb09b58e6a0aa10c59b3e20a59c45ef2a86: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=7ea72e6a-4da1-4976-adb1-3309d6f963ce name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 28 04:19:45 addons-663058 crio[876]: time="2023-11-28 04:19:45.068166577Z" level=info msg="Removing container: a490b84bde55946ceac9e9998558c6f0416e8f803c40977aded7c183ce216572" id=7231605e-9446-4b62-9cd6-92032b4711de name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 28 04:19:45 addons-663058 crio[876]: time="2023-11-28 04:19:45.127200786Z" level=info msg="Removed container a490b84bde55946ceac9e9998558c6f0416e8f803c40977aded7c183ce216572: default/hello-world-app-5d77478584-zgzbc/hello-world-app" id=7231605e-9446-4b62-9cd6-92032b4711de name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 28 04:19:46 addons-663058 crio[876]: time="2023-11-28 04:19:46.907374670Z" level=info msg="Stopping container: 5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028 (timeout: 2s)" id=98a145a5-bd9f-49b7-ada1-3bf6aafbb22f name=/runtime.v1.RuntimeService/StopContainer
	Nov 28 04:19:48 addons-663058 crio[876]: time="2023-11-28 04:19:48.917170772Z" level=warning msg="Stopping container 5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=98a145a5-bd9f-49b7-ada1-3bf6aafbb22f name=/runtime.v1.RuntimeService/StopContainer
	Nov 28 04:19:48 addons-663058 conmon[5938]: conmon 5a18f4cab6d0363e0030 <ninfo>: container 5949 exited with status 137
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.083740587Z" level=info msg="Stopped container 5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028: ingress-nginx/ingress-nginx-controller-7c6974c4d8-gll68/controller" id=98a145a5-bd9f-49b7-ada1-3bf6aafbb22f name=/runtime.v1.RuntimeService/StopContainer
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.084436370Z" level=info msg="Stopping pod sandbox: a35c3d7739ecac6fe5fed55d2f0bdb69ba16de2f4b46155e9162e75568139f5e" id=1ffc1795-fbd4-4548-9ef0-07a7a3448722 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.088411529Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-NVG7OX7UYXQU5LCV - [0:0]\n:KUBE-HP-6LRF2BZRPZUMI2OF - [0:0]\n-X KUBE-HP-6LRF2BZRPZUMI2OF\n-X KUBE-HP-NVG7OX7UYXQU5LCV\nCOMMIT\n"
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.090160730Z" level=info msg="Closing host port tcp:80"
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.090212373Z" level=info msg="Closing host port tcp:443"
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.091861398Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.091894185Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.092076831Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-gll68 Namespace:ingress-nginx ID:a35c3d7739ecac6fe5fed55d2f0bdb69ba16de2f4b46155e9162e75568139f5e UID:8ea1dc22-b81a-473f-bc92-56090f53a7b9 NetNS:/var/run/netns/2d3fc9f3-3eb4-4ad3-95c8-3d6c9c3af939 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.092220805Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-gll68 from CNI network \"kindnet\" (type=ptp)"
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.118365078Z" level=info msg="Stopped pod sandbox: a35c3d7739ecac6fe5fed55d2f0bdb69ba16de2f4b46155e9162e75568139f5e" id=1ffc1795-fbd4-4548-9ef0-07a7a3448722 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 28 04:19:49 addons-663058 crio[876]: time="2023-11-28 04:19:49.995659087Z" level=info msg="Removing container: 5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028" id=226d535b-e8c5-4019-8325-2cf5fe61e1bd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 28 04:19:50 addons-663058 crio[876]: time="2023-11-28 04:19:50.029389470Z" level=info msg="Removed container 5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028: ingress-nginx/ingress-nginx-controller-7c6974c4d8-gll68/controller" id=226d535b-e8c5-4019-8325-2cf5fe61e1bd name=/runtime.v1.RuntimeService/RemoveContainer
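	
The teardown above is the standard CRI stop sequence: StopContainer with the pod's 2-second grace period, escalation to SIGKILL when the controller ignores the stop signal (conmon reports exit status 137), then StopPodSandbox, which frees host ports 80/443 and detaches the sandbox from the kindnet CNI network. A sketch of issuing those two calls directly against the CRI-O socket via the generic CRI API (the container and sandbox IDs are truncated placeholders, and error handling is minimal):

    // cri_stop.go - sketch of the StopContainer/StopPodSandbox flow seen in
    // the CRI-O log above. IDs are placeholders, not real object IDs.
    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Graceful stop with a 2-second deadline, as in the log; the runtime
        // escalates to SIGKILL (exit 137) if the process is still alive.
        if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: "5a18f4cab6d0...", // placeholder
            Timeout:     2,
        }); err != nil {
            log.Fatal(err)
        }

        // Tearing down the sandbox is what releases the hostPorts and the
        // CNI attachment, as the iptables/port messages above show.
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
            PodSandboxId: "a35c3d7739ec...", // placeholder
        }); err != nil {
            log.Fatal(err)
        }
    }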
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f6917d579c3b       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             10 seconds ago      Exited              hello-world-app           2                   2a3d0752e6d33       hello-world-app-5d77478584-zgzbc
	f6cdc8f02b5b8       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                              2 minutes ago       Running             nginx                     0                   dbb5b546c1fc9       nginx
	ce109bba7a574       ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd29304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9                        2 minutes ago       Running             headlamp                  0                   f876529884ee0       headlamp-777fd4b855-mt76q
	78f57653f2ff0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 4 minutes ago       Running             gcp-auth                  0                   6aa5ff38874e1       gcp-auth-d4c87556c-z9429
	4d470d590134a       af594c6a879f2e441ea446a122296abbbe11aae5547e780f2582fbcda5df271c                                                             4 minutes ago       Exited              patch                     2                   e136fa09ca742       ingress-nginx-admission-patch-jpd7k
	df9a18e9b7a5e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago       Exited              create                    0                   a846d72d92a24       ingress-nginx-admission-create-qghgt
	60f1b92a5551c       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             4 minutes ago       Running             local-path-provisioner    0                   9f2dd47fae543       local-path-provisioner-78b46b4d5c-fznrp
	bd3caffa8bb44       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   c18bfe6bfa9f3       coredns-5dd5756b68-fv8lf
	709553cc3b60a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   44e63b3317813       storage-provisioner
	3dc4992b65423       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago       Running             kindnet-cni               0                   17e8161a7f51f       kindnet-rqksn
	2b4edc9f6480a       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                                             5 minutes ago       Running             kube-proxy                0                   741f07ad9cf2e       kube-proxy-lddbr
	5e884c00ee8a5       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                                             5 minutes ago       Running             kube-apiserver            0                   45f880cab21f1       kube-apiserver-addons-663058
	7268378311f58       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                                             5 minutes ago       Running             kube-controller-manager   0                   7b28045a9090b       kube-controller-manager-addons-663058
	8fcedf245f283       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                                             5 minutes ago       Running             kube-scheduler            0                   6f59d2e4627fd       kube-scheduler-addons-663058
	036c8cade2cc9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   39627d5ccebd9       etcd-addons-663058
	
	* 
	* ==> coredns [bd3caffa8bb447a980cccda87d267cb5e811ccca618fe19d5256463c17c5b214] <==
	* [INFO] 10.244.0.19:33289 - 19660 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070211s
	[INFO] 10.244.0.19:33289 - 15286 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062695s
	[INFO] 10.244.0.19:33289 - 12269 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060709s
	[INFO] 10.244.0.19:33289 - 47263 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069678s
	[INFO] 10.244.0.19:33289 - 33669 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000979712s
	[INFO] 10.244.0.19:33289 - 31920 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000911577s
	[INFO] 10.244.0.19:33289 - 5235 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078154s
	[INFO] 10.244.0.19:36753 - 42865 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119508s
	[INFO] 10.244.0.19:46027 - 8514 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00030015s
	[INFO] 10.244.0.19:46027 - 63364 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087245s
	[INFO] 10.244.0.19:46027 - 54097 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062039s
	[INFO] 10.244.0.19:46027 - 38739 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057804s
	[INFO] 10.244.0.19:46027 - 25446 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058625s
	[INFO] 10.244.0.19:46027 - 61271 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043889s
	[INFO] 10.244.0.19:46027 - 44307 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001418298s
	[INFO] 10.244.0.19:36753 - 6327 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000171355s
	[INFO] 10.244.0.19:36753 - 15079 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045833s
	[INFO] 10.244.0.19:46027 - 23192 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00123032s
	[INFO] 10.244.0.19:36753 - 7281 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066872s
	[INFO] 10.244.0.19:46027 - 59098 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006025s
	[INFO] 10.244.0.19:36753 - 25977 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068799s
	[INFO] 10.244.0.19:36753 - 7963 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071934s
	[INFO] 10.244.0.19:36753 - 8298 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001075408s
	[INFO] 10.244.0.19:36753 - 30052 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001310204s
	[INFO] 10.244.0.19:36753 - 18833 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071442s
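	
The NXDOMAIN/NOERROR pattern above is ordinary ndots expansion: hello-world-app.default.svc.cluster.local contains only four dots, below the ndots:5 threshold, so the resolver tries every search suffix before the absolute name, and only the final query answers NOERROR. A sketch of the resolv.conf kubelet typically writes for a pod in the ingress-nginx namespace (assumed from the suffixes in the log, not dumped from this node; the nameserver IP is the conventional kube-dns ClusterIP):

    # assumed pod resolv.conf; the search suffixes match the queries logged above
    nameserver 10.96.0.10
    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    options ndots:5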
	
	* 
	* ==> describe nodes <==
	* Name:               addons-663058
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-663058
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=addons-663058
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_14_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-663058
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:14:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-663058
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:19:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:19:49 +0000   Tue, 28 Nov 2023 04:14:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:19:49 +0000   Tue, 28 Nov 2023 04:14:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:19:49 +0000   Tue, 28 Nov 2023 04:14:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:19:49 +0000   Tue, 28 Nov 2023 04:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-663058
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6d8c00a59c54c0ba1afa9423884eae2
	  System UUID:                bbc88841-6d18-4119-a320-ac4fc7024d19
	  Boot ID:                    29ce650a-e22a-4e0d-bffe-126490eafcf6
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-zgzbc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-z9429                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  headlamp                    headlamp-777fd4b855-mt76q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 coredns-5dd5756b68-fv8lf                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m28s
	  kube-system                 etcd-addons-663058                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m41s
	  kube-system                 kindnet-rqksn                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m28s
	  kube-system                 kube-apiserver-addons-663058               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-controller-manager-addons-663058      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-proxy-lddbr                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-addons-663058               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  local-path-storage          local-path-provisioner-78b46b4d5c-fznrp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (2%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  Starting                 5m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m49s (x8 over 5m49s)  kubelet          Node addons-663058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m49s (x8 over 5m49s)  kubelet          Node addons-663058 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x8 over 5m49s)  kubelet          Node addons-663058 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m41s                  kubelet          Node addons-663058 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s                  kubelet          Node addons-663058 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s                  kubelet          Node addons-663058 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m28s                  node-controller  Node addons-663058 event: Registered Node addons-663058 in Controller
	  Normal  NodeReady                4m55s                  kubelet          Node addons-663058 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001120] FS-Cache: O-key=[8] 'a83f5c0100000000'
	[  +0.000756] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001014] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008d9cd212
	[  +0.001122] FS-Cache: N-key=[8] 'a83f5c0100000000'
	[  +0.002755] FS-Cache: Duplicate cookie detected
	[  +0.000776] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001033] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=0000000011f38211
	[  +0.001110] FS-Cache: O-key=[8] 'a83f5c0100000000'
	[  +0.000743] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001000] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000002669129b
	[  +0.001100] FS-Cache: N-key=[8] 'a83f5c0100000000'
	[  +2.765249] FS-Cache: Duplicate cookie detected
	[  +0.000907] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001267] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=00000000f5205eb5
	[  +0.001148] FS-Cache: O-key=[8] 'a73f5c0100000000'
	[  +0.000830] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001133] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=00000000e23c0dcf
	[  +0.001148] FS-Cache: N-key=[8] 'a73f5c0100000000'
	[  +0.411825] FS-Cache: Duplicate cookie detected
	[  +0.001128] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001152] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=00000000bc6ab90a
	[  +0.005667] FS-Cache: O-key=[8] 'ad3f5c0100000000'
	[  +0.000830] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001029] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008d9cd212
	[  +0.001153] FS-Cache: N-key=[8] 'ad3f5c0100000000'
	
	* 
	* ==> etcd [036c8cade2cc910399e51718c86f9474a40a0f5565f48c5702b9e63bb747995f] <==
	* {"level":"info","ts":"2023-11-28T04:14:06.734011Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-28T04:14:06.734384Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:14:06.735224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T04:14:06.744725Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T04:14:06.744773Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T04:14:06.746458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:14:06.746695Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:14:06.761856Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2023-11-28T04:14:27.331331Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.905624ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025445600773773 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:353 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:120 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T04:14:27.331413Z","caller":"traceutil/trace.go:171","msg":"trace[1459849859] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"308.569261ms","start":"2023-11-28T04:14:27.022833Z","end":"2023-11-28T04:14:27.331402Z","steps":["trace[1459849859] 'process raft request'  (duration: 42.197785ms)","trace[1459849859] 'compare'  (duration: 265.819331ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T04:14:27.331452Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:14:27.022815Z","time spent":"308.618286ms","remote":"127.0.0.1:38698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":168,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:353 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:120 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >"}
	{"level":"info","ts":"2023-11-28T04:14:28.324462Z","caller":"traceutil/trace.go:171","msg":"trace[1728547988] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"185.102124ms","start":"2023-11-28T04:14:27.556324Z","end":"2023-11-28T04:14:27.741426Z","steps":["trace[1728547988] 'process raft request'  (duration: 181.162724ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:14:28.324645Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:14:27.556297Z","time spent":"768.251506ms","remote":"127.0.0.1:38944","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4081,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:395 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4032 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2023-11-28T04:14:28.518124Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.123988ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128025445600773778 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-lddbr.179bae3e77094469\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-lddbr.179bae3e77094469\" value_size:684 lease:8128025445600773362 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T04:14:28.518393Z","caller":"traceutil/trace.go:171","msg":"trace[1758886506] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"297.153553ms","start":"2023-11-28T04:14:28.221216Z","end":"2023-11-28T04:14:28.518369Z","steps":["trace[1758886506] 'process raft request'  (duration: 103.715482ms)","trace[1758886506] 'compare'  (duration: 176.841132ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-28T04:14:28.518605Z","caller":"traceutil/trace.go:171","msg":"trace[2071696165] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"102.819905ms","start":"2023-11-28T04:14:28.415777Z","end":"2023-11-28T04:14:28.518597Z","steps":["trace[2071696165] 'process raft request'  (duration: 102.424608ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:14:28.593129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.590322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2023-11-28T04:14:28.593196Z","caller":"traceutil/trace.go:171","msg":"trace[1938389933] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:403; }","duration":"115.666235ms","start":"2023-11-28T04:14:28.477517Z","end":"2023-11-28T04:14:28.593183Z","steps":["trace[1938389933] 'agreement among raft nodes before linearized reading'  (duration: 77.27304ms)","trace[1938389933] 'get authentication metadata'  (duration: 38.284896ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T04:14:28.665385Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.125197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T04:14:28.665517Z","caller":"traceutil/trace.go:171","msg":"trace[263427895] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:403; }","duration":"164.287025ms","start":"2023-11-28T04:14:28.501215Z","end":"2023-11-28T04:14:28.665502Z","steps":["trace[263427895] 'agreement among raft nodes before linearized reading'  (duration: 91.513825ms)","trace[263427895] 'get authentication metadata'  (duration: 17.055653ms)","trace[263427895] 'range keys from in-memory index tree'  (duration: 55.541483ms)"],"step_count":3}
	{"level":"warn","ts":"2023-11-28T04:14:29.894741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.483673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T04:14:29.894803Z","caller":"traceutil/trace.go:171","msg":"trace[329495541] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:407; }","duration":"145.551078ms","start":"2023-11-28T04:14:29.749237Z","end":"2023-11-28T04:14:29.894788Z","steps":["trace[329495541] 'range keys from in-memory index tree'  (duration: 145.399727ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:14:29.894934Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.765855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T04:14:29.894957Z","caller":"traceutil/trace.go:171","msg":"trace[156074830] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:407; }","duration":"145.789419ms","start":"2023-11-28T04:14:29.749162Z","end":"2023-11-28T04:14:29.894951Z","steps":["trace[156074830] 'range keys from in-memory index tree'  (duration: 145.59879ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T04:17:00.286479Z","caller":"traceutil/trace.go:171","msg":"trace[1328803189] transaction","detail":"{read_only:false; response_revision:1596; number_of_response:1; }","duration":"109.556625ms","start":"2023-11-28T04:17:00.176902Z","end":"2023-11-28T04:17:00.286459Z","steps":["trace[1328803189] 'process raft request'  (duration: 106.389348ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [78f57653f2ff050fddafe8581ad8ad990ebceaeb7071dd0fbd2e42a98c47e5ad] <==
	* 2023/11/28 04:15:44 GCP Auth Webhook started!
	2023/11/28 04:16:18 Ready to marshal response ...
	2023/11/28 04:16:18 Ready to write response ...
	2023/11/28 04:16:23 Ready to marshal response ...
	2023/11/28 04:16:23 Ready to write response ...
	2023/11/28 04:16:34 Ready to marshal response ...
	2023/11/28 04:16:34 Ready to write response ...
	2023/11/28 04:16:34 Ready to marshal response ...
	2023/11/28 04:16:34 Ready to write response ...
	2023/11/28 04:16:42 Ready to marshal response ...
	2023/11/28 04:16:42 Ready to write response ...
	2023/11/28 04:16:43 Ready to marshal response ...
	2023/11/28 04:16:43 Ready to write response ...
	2023/11/28 04:16:49 Ready to marshal response ...
	2023/11/28 04:16:49 Ready to write response ...
	2023/11/28 04:16:50 Ready to marshal response ...
	2023/11/28 04:16:50 Ready to write response ...
	2023/11/28 04:16:50 Ready to marshal response ...
	2023/11/28 04:16:50 Ready to write response ...
	2023/11/28 04:17:07 Ready to marshal response ...
	2023/11/28 04:17:07 Ready to write response ...
	2023/11/28 04:19:28 Ready to marshal response ...
	2023/11/28 04:19:28 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  04:19:54 up  7:02,  0 users,  load average: 0.83, 1.56, 2.31
	Linux addons-663058 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [3dc4992b65423e14149d949555da18f018acfab601367f9596af93f22301ccd8] <==
	* I1128 04:17:49.559161       1 main.go:227] handling current node
	I1128 04:17:59.564140       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:17:59.564167       1 main.go:227] handling current node
	I1128 04:18:09.568816       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:18:09.568844       1 main.go:227] handling current node
	I1128 04:18:19.572847       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:18:19.572875       1 main.go:227] handling current node
	I1128 04:18:29.576988       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:18:29.577015       1 main.go:227] handling current node
	I1128 04:18:39.580804       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:18:39.580834       1 main.go:227] handling current node
	I1128 04:18:49.586940       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:18:49.586970       1 main.go:227] handling current node
	I1128 04:18:59.590933       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:18:59.590961       1 main.go:227] handling current node
	I1128 04:19:09.602831       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:19:09.602864       1 main.go:227] handling current node
	I1128 04:19:19.606619       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:19:19.606644       1 main.go:227] handling current node
	I1128 04:19:29.616429       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:19:29.616460       1 main.go:227] handling current node
	I1128 04:19:39.620088       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:19:39.620119       1 main.go:227] handling current node
	I1128 04:19:49.631400       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:19:49.631430       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5e884c00ee8a5047810358d794c1675eed5d5886c446f31644a894e406d18db2] <==
	* I1128 04:17:00.084685       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 04:17:00.134064       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 04:17:00.135438       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 04:17:00.157348       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 04:17:00.157981       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 04:17:00.224007       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 04:17:00.224081       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 04:17:00.300297       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 04:17:00.300653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 04:17:00.326588       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 04:17:00.326659       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 04:17:00.340576       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 04:17:00.340627       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 04:17:00.368798       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 04:17:00.368864       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1128 04:17:01.327326       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1128 04:17:01.341017       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1128 04:17:01.431614       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1128 04:17:07.435296       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1128 04:17:07.881765       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.101.94"}
	I1128 04:17:08.772257       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1128 04:17:08.790031       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1128 04:17:09.815272       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1128 04:17:31.168192       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1128 04:19:28.449084       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.240.22"}
	
	* 
	* ==> kube-controller-manager [7268378311f5823fd05dc544ccc075d4a3c9a185123679e9d0dad9c7665cfee4] <==
	* E1128 04:18:14.713071       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 04:18:39.977948       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 04:18:39.977983       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 04:18:57.394890       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 04:18:57.394927       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 04:19:10.369991       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 04:19:10.370024       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 04:19:10.903558       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 04:19:10.903592       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 04:19:16.187460       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 04:19:16.187497       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1128 04:19:28.192816       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1128 04:19:28.221606       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-zgzbc"
	I1128 04:19:28.231246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="39.044438ms"
	I1128 04:19:28.248737       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="17.368114ms"
	I1128 04:19:28.248885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="34.781µs"
	I1128 04:19:30.966662       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.059µs"
	I1128 04:19:31.968552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="112.927µs"
	I1128 04:19:32.966394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="79.097µs"
	W1128 04:19:37.307117       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 04:19:37.307153       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1128 04:19:45.030903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="185.935µs"
	I1128 04:19:45.861068       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1128 04:19:45.868026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="37.284µs"
	I1128 04:19:45.869896       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	* 
	* ==> kube-proxy [2b4edc9f6480aab982a12acad2d3575de90c8d6b4dc0c43fc199134241c5ddaa] <==
	* I1128 04:14:32.011117       1 server_others.go:69] "Using iptables proxy"
	I1128 04:14:32.148092       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1128 04:14:32.281498       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1128 04:14:32.284943       1 server_others.go:152] "Using iptables Proxier"
	I1128 04:14:32.285012       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1128 04:14:32.285024       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1128 04:14:32.285098       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 04:14:32.285290       1 server.go:846] "Version info" version="v1.28.4"
	I1128 04:14:32.285307       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:14:32.287190       1 config.go:188] "Starting service config controller"
	I1128 04:14:32.287213       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 04:14:32.287232       1 config.go:97] "Starting endpoint slice config controller"
	I1128 04:14:32.287236       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 04:14:32.287676       1 config.go:315] "Starting node config controller"
	I1128 04:14:32.287693       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 04:14:32.400500       1 shared_informer.go:318] Caches are synced for node config
	I1128 04:14:32.402122       1 shared_informer.go:318] Caches are synced for service config
	I1128 04:14:32.402194       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [8fcedf245f2834753643fc25630c10f6605c8865193f109055ccbf3b84fb442e] <==
	* W1128 04:14:09.848609       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:14:09.848789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 04:14:09.848917       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 04:14:09.848957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 04:14:09.849040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1128 04:14:09.849089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1128 04:14:09.849182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:14:09.849217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1128 04:14:09.849369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 04:14:09.849464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 04:14:10.670378       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 04:14:10.670415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1128 04:14:10.671818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 04:14:10.671852       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 04:14:10.674375       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 04:14:10.674408       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 04:14:10.705115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 04:14:10.705234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1128 04:14:10.844527       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:14:10.844649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 04:14:10.857160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 04:14:10.857195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 04:14:10.874542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:14:10.874581       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1128 04:14:13.537293       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 28 04:19:34 addons-663058 kubelet[1341]: E1128 04:19:34.989912    1341 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d3ad0f9e515af31325a5ad9e73e04305eede0ff9786efc4e2d0ac8fa6ae2f0b1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d3ad0f9e515af31325a5ad9e73e04305eede0ff9786efc4e2d0ac8fa6ae2f0b1/diff: no such file or directory, extraDiskErr: <nil>
	Nov 28 04:19:37 addons-663058 kubelet[1341]: I1128 04:19:37.025094    1341 scope.go:117] "RemoveContainer" containerID="2563d949f6bb77f1cd4f4e1416018fb09b58e6a0aa10c59b3e20a59c45ef2a86"
	Nov 28 04:19:37 addons-663058 kubelet[1341]: E1128 04:19:37.025871    1341 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f49c6c00-2204-4a6d-b2f8-81f3c417d31b)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f49c6c00-2204-4a6d-b2f8-81f3c417d31b"
	Nov 28 04:19:44 addons-663058 kubelet[1341]: I1128 04:19:44.024993    1341 scope.go:117] "RemoveContainer" containerID="a490b84bde55946ceac9e9998558c6f0416e8f803c40977aded7c183ce216572"
	Nov 28 04:19:44 addons-663058 kubelet[1341]: I1128 04:19:44.449232    1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntgs7\" (UniqueName: \"kubernetes.io/projected/f49c6c00-2204-4a6d-b2f8-81f3c417d31b-kube-api-access-ntgs7\") pod \"f49c6c00-2204-4a6d-b2f8-81f3c417d31b\" (UID: \"f49c6c00-2204-4a6d-b2f8-81f3c417d31b\") "
	Nov 28 04:19:44 addons-663058 kubelet[1341]: I1128 04:19:44.454051    1341 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49c6c00-2204-4a6d-b2f8-81f3c417d31b-kube-api-access-ntgs7" (OuterVolumeSpecName: "kube-api-access-ntgs7") pod "f49c6c00-2204-4a6d-b2f8-81f3c417d31b" (UID: "f49c6c00-2204-4a6d-b2f8-81f3c417d31b"). InnerVolumeSpecName "kube-api-access-ntgs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 28 04:19:44 addons-663058 kubelet[1341]: I1128 04:19:44.550254    1341 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ntgs7\" (UniqueName: \"kubernetes.io/projected/f49c6c00-2204-4a6d-b2f8-81f3c417d31b-kube-api-access-ntgs7\") on node \"addons-663058\" DevicePath \"\""
	Nov 28 04:19:44 addons-663058 kubelet[1341]: I1128 04:19:44.978101    1341 scope.go:117] "RemoveContainer" containerID="2563d949f6bb77f1cd4f4e1416018fb09b58e6a0aa10c59b3e20a59c45ef2a86"
	Nov 28 04:19:44 addons-663058 kubelet[1341]: I1128 04:19:44.981330    1341 scope.go:117] "RemoveContainer" containerID="3f6917d579c3b90530d5a3134c6e024a69ed262422851dc14020167f572978e8"
	Nov 28 04:19:44 addons-663058 kubelet[1341]: E1128 04:19:44.981612    1341 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-zgzbc_default(fefa8015-99f3-4be6-b998-2df9903b6e67)\"" pod="default/hello-world-app-5d77478584-zgzbc" podUID="fefa8015-99f3-4be6-b998-2df9903b6e67"
	Nov 28 04:19:45 addons-663058 kubelet[1341]: I1128 04:19:45.034181    1341 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f49c6c00-2204-4a6d-b2f8-81f3c417d31b" path="/var/lib/kubelet/pods/f49c6c00-2204-4a6d-b2f8-81f3c417d31b/volumes"
	Nov 28 04:19:45 addons-663058 kubelet[1341]: I1128 04:19:45.066039    1341 scope.go:117] "RemoveContainer" containerID="a490b84bde55946ceac9e9998558c6f0416e8f803c40977aded7c183ce216572"
	Nov 28 04:19:47 addons-663058 kubelet[1341]: I1128 04:19:47.026317    1341 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2747521b-f266-46e0-ab71-0839bfe87d2f" path="/var/lib/kubelet/pods/2747521b-f266-46e0-ab71-0839bfe87d2f/volumes"
	Nov 28 04:19:47 addons-663058 kubelet[1341]: I1128 04:19:47.026683    1341 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="69ee64e1-78ab-447c-bd62-5f64ea639738" path="/var/lib/kubelet/pods/69ee64e1-78ab-447c-bd62-5f64ea639738/volumes"
	Nov 28 04:19:49 addons-663058 kubelet[1341]: I1128 04:19:49.196286    1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ea1dc22-b81a-473f-bc92-56090f53a7b9-webhook-cert\") pod \"8ea1dc22-b81a-473f-bc92-56090f53a7b9\" (UID: \"8ea1dc22-b81a-473f-bc92-56090f53a7b9\") "
	Nov 28 04:19:49 addons-663058 kubelet[1341]: I1128 04:19:49.196350    1341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx5d5\" (UniqueName: \"kubernetes.io/projected/8ea1dc22-b81a-473f-bc92-56090f53a7b9-kube-api-access-hx5d5\") pod \"8ea1dc22-b81a-473f-bc92-56090f53a7b9\" (UID: \"8ea1dc22-b81a-473f-bc92-56090f53a7b9\") "
	Nov 28 04:19:49 addons-663058 kubelet[1341]: I1128 04:19:49.200766    1341 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea1dc22-b81a-473f-bc92-56090f53a7b9-kube-api-access-hx5d5" (OuterVolumeSpecName: "kube-api-access-hx5d5") pod "8ea1dc22-b81a-473f-bc92-56090f53a7b9" (UID: "8ea1dc22-b81a-473f-bc92-56090f53a7b9"). InnerVolumeSpecName "kube-api-access-hx5d5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 28 04:19:49 addons-663058 kubelet[1341]: I1128 04:19:49.201325    1341 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea1dc22-b81a-473f-bc92-56090f53a7b9-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8ea1dc22-b81a-473f-bc92-56090f53a7b9" (UID: "8ea1dc22-b81a-473f-bc92-56090f53a7b9"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 28 04:19:49 addons-663058 kubelet[1341]: I1128 04:19:49.297275    1341 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ea1dc22-b81a-473f-bc92-56090f53a7b9-webhook-cert\") on node \"addons-663058\" DevicePath \"\""
	Nov 28 04:19:49 addons-663058 kubelet[1341]: I1128 04:19:49.297334    1341 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hx5d5\" (UniqueName: \"kubernetes.io/projected/8ea1dc22-b81a-473f-bc92-56090f53a7b9-kube-api-access-hx5d5\") on node \"addons-663058\" DevicePath \"\""
	Nov 28 04:19:49 addons-663058 kubelet[1341]: I1128 04:19:49.994120    1341 scope.go:117] "RemoveContainer" containerID="5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028"
	Nov 28 04:19:50 addons-663058 kubelet[1341]: I1128 04:19:50.029806    1341 scope.go:117] "RemoveContainer" containerID="5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028"
	Nov 28 04:19:50 addons-663058 kubelet[1341]: E1128 04:19:50.030326    1341 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028\": container with ID starting with 5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028 not found: ID does not exist" containerID="5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028"
	Nov 28 04:19:50 addons-663058 kubelet[1341]: I1128 04:19:50.030373    1341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028"} err="failed to get container status \"5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028\": rpc error: code = NotFound desc = could not find container \"5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028\": container with ID starting with 5a18f4cab6d0363e00309ea1e2e2e87447c87c8316d1b476a7a0eb38cc9d5028 not found: ID does not exist"
	Nov 28 04:19:51 addons-663058 kubelet[1341]: I1128 04:19:51.026039    1341 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8ea1dc22-b81a-473f-bc92-56090f53a7b9" path="/var/lib/kubelet/pods/8ea1dc22-b81a-473f-bc92-56090f53a7b9/volumes"
	
	* 
	* ==> storage-provisioner [709553cc3b60ae1b7030aff18b38a0b35ff4c4168d7767d6be0eac87ccd9c8f3] <==
	* I1128 04:15:00.870067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 04:15:00.914071       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 04:15:00.914480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 04:15:00.951620       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 04:15:00.953767       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-663058_8f15bd45-7df5-4f32-806d-c80e6fd043cd!
	I1128 04:15:00.964224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d31ec48-8e02-4e38-8aff-b9a3a4e61c8e", APIVersion:"v1", ResourceVersion:"882", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-663058_8f15bd45-7df5-4f32-806d-c80e6fd043cd became leader
	I1128 04:15:01.056968       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-663058_8f15bd45-7df5-4f32-806d-c80e6fd043cd!
	

-- /stdout --
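The scheduler errors at the top of these logs are the usual startup race: kube-scheduler's informers try to list and watch resources before its RBAC bindings have propagated, and the closing "Caches are synced" line shows the permissions did converge, so they are unlikely to be the root cause. The kubelet section is more telling: kube-ingress-dns-minikube is in CrashLoopBackOff. A diagnostic sketch for confirming both readings against the live cluster (context and pod names taken from this run):

    # Verify the scheduler's RBAC converged (expect "yes"):
    kubectl --context addons-663058 auth can-i list pods --as=system:kube-scheduler
    # See why the ingress-dns pod keeps restarting:
    kubectl --context addons-663058 -n kube-system describe pod kube-ingress-dns-minikube
    kubectl --context addons-663058 -n kube-system logs kube-ingress-dns-minikube --previous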
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-663058 -n addons-663058
helpers_test.go:261: (dbg) Run:  kubectl --context addons-663058 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (169.13s)
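To reproduce this failure outside the harness, the same probe can be run by hand; a sketch assuming the addons-663058 profile from this run is still up (--max-time is added so curl fails fast instead of hanging for the 2m9s seen above):

    out/minikube-linux-arm64 -p addons-663058 ssh \
      "curl -sv --max-time 15 -H 'Host: nginx.example.com' http://127.0.0.1/"
    # If that times out too, check whether the ingress controller is listening at all:
    kubectl --context addons-663058 -n ingress-nginx get pods,svc -o wide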

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (180.36s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-120112 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-120112 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.889863971s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-120112 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-120112 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [64d756ad-08fb-4d36-a9e8-93347a30292a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [64d756ad-08fb-4d36-a9e8-93347a30292a] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.019491106s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-120112 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1128 04:28:53.035012 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:53.040427 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:53.050740 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:53.071120 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:53.111440 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:53.191730 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:53.352102 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:53.672631 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:54.313036 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:55.593487 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:28:58.153754 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:29:03.274029 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:29:13.514302 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-120112 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.419790304s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
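"Process exited with status 28" above is curl's exit code 28, operation timed out: the ssh session itself worked, but nothing answered on port 80 within curl's window. A hedged next step (profile and selector taken from this run) is to pull the controller's own logs before blaming the route:

    kubectl --context ingress-addon-legacy-120112 -n ingress-nginx logs \
      -l app.kubernetes.io/component=controller --tail=50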
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-120112 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-120112 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1128 04:29:33.994554 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.024904884s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
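The 15s burned above is nslookup's default retry cycle. When probing the ingress-dns responder by hand, dig with one short-timeout attempt gives a faster and clearer answer; a sketch using the hostname and node IP from this test:

    # One query, 3-second timeout, no retries:
    dig +time=3 +tries=1 hello-john.test @192.168.49.2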
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-120112 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-120112 addons disable ingress-dns --alsologtostderr -v=1: (2.820945799s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-120112 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-120112 addons disable ingress --alsologtostderr -v=1: (7.652279808s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-120112
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-120112:

-- stdout --
	[
	    {
	        "Id": "6add2fe13264c6e21406df194b6c8253dfb2eb36cf4bcd376f0d5cb106aca463",
	        "Created": "2023-11-28T04:25:18.74084233Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1290402,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T04:25:19.099341154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/6add2fe13264c6e21406df194b6c8253dfb2eb36cf4bcd376f0d5cb106aca463/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6add2fe13264c6e21406df194b6c8253dfb2eb36cf4bcd376f0d5cb106aca463/hostname",
	        "HostsPath": "/var/lib/docker/containers/6add2fe13264c6e21406df194b6c8253dfb2eb36cf4bcd376f0d5cb106aca463/hosts",
	        "LogPath": "/var/lib/docker/containers/6add2fe13264c6e21406df194b6c8253dfb2eb36cf4bcd376f0d5cb106aca463/6add2fe13264c6e21406df194b6c8253dfb2eb36cf4bcd376f0d5cb106aca463-json.log",
	        "Name": "/ingress-addon-legacy-120112",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-120112:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-120112",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/80fdf569deebc07ee41e49c111bcf69088c8d801e8bb96915369548d053e4a81-init/diff:/var/lib/docker/overlay2/cc610f7b23c869d03809246385f10f80b89207e6d90717a6a4867696f2289751/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80fdf569deebc07ee41e49c111bcf69088c8d801e8bb96915369548d053e4a81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80fdf569deebc07ee41e49c111bcf69088c8d801e8bb96915369548d053e4a81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80fdf569deebc07ee41e49c111bcf69088c8d801e8bb96915369548d053e4a81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-120112",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-120112/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-120112",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-120112",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-120112",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9fb6b71ee812dfa66265dfe6abe0e2ed9a8588d61c7f5fdb41d2a1d22a06d809",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34339"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34338"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34335"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34337"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34336"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9fb6b71ee812",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-120112": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6add2fe13264",
	                        "ingress-addon-legacy-120112"
	                    ],
	                    "NetworkID": "11751738cab24b3631fa5f0c843b8f8258a6e0513ece0eb7a880f5379d814d75",
	                    "EndpointID": "12a4c45183adb391476057501ffbbc5cb191b0f84d9e04088b9ffa1773d98e3b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
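When only a few of these fields matter, docker inspect can apply a Go template instead of emitting the full document; for example, to pull just the container state and the node IP that the nslookup above targeted (container and network names from this run; -f is the standard docker CLI flag):

    docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "ingress-addon-legacy-120112").IPAddress}}' \
      ingress-addon-legacy-120112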
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-120112 -n ingress-addon-legacy-120112
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-120112 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-120112 logs -n 25: (1.435967541s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-789811                                                      | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-789811 image ls                                             | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	| image          | functional-789811 image load --daemon                                  | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-789811               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-789811 image ls                                             | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	| image          | functional-789811 image save                                           | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-789811               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-789811 image rm                                             | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-789811               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-789811 image ls                                             | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	| image          | functional-789811 image load                                           | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-789811 image ls                                             | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	| image          | functional-789811 image save --daemon                                  | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-789811               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-789811                                                      | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-789811                                                      | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-789811 ssh pgrep                                            | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-789811                                                      | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-789811                                                      | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-789811 image build -t                                       | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	|                | localhost/my-image:functional-789811                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-789811 image ls                                             | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:24 UTC |
	| delete         | -p functional-789811                                                   | functional-789811           | jenkins | v1.32.0 | 28 Nov 23 04:24 UTC | 28 Nov 23 04:25 UTC |
	| start          | -p ingress-addon-legacy-120112                                         | ingress-addon-legacy-120112 | jenkins | v1.32.0 | 28 Nov 23 04:25 UTC | 28 Nov 23 04:26 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-120112                                            | ingress-addon-legacy-120112 | jenkins | v1.32.0 | 28 Nov 23 04:26 UTC | 28 Nov 23 04:26 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-120112                                            | ingress-addon-legacy-120112 | jenkins | v1.32.0 | 28 Nov 23 04:26 UTC | 28 Nov 23 04:26 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-120112                                            | ingress-addon-legacy-120112 | jenkins | v1.32.0 | 28 Nov 23 04:27 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-120112 ip                                         | ingress-addon-legacy-120112 | jenkins | v1.32.0 | 28 Nov 23 04:29 UTC | 28 Nov 23 04:29 UTC |
	| addons         | ingress-addon-legacy-120112                                            | ingress-addon-legacy-120112 | jenkins | v1.32.0 | 28 Nov 23 04:29 UTC | 28 Nov 23 04:29 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-120112                                            | ingress-addon-legacy-120112 | jenkins | v1.32.0 | 28 Nov 23 04:29 UTC | 28 Nov 23 04:29 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:25:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:25:00.865312 1289939 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:25:00.865523 1289939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:25:00.865552 1289939 out.go:309] Setting ErrFile to fd 2...
	I1128 04:25:00.865573 1289939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:25:00.865857 1289939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:25:00.866359 1289939 out.go:303] Setting JSON to false
	I1128 04:25:00.867851 1289939 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25636,"bootTime":1701119865,"procs":465,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:25:00.867962 1289939 start.go:138] virtualization:  
	I1128 04:25:00.870395 1289939 out.go:177] * [ingress-addon-legacy-120112] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:25:00.872860 1289939 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:25:00.873009 1289939 notify.go:220] Checking for updates...
	I1128 04:25:00.876599 1289939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:25:00.878804 1289939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:25:00.880586 1289939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:25:00.882532 1289939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:25:00.884292 1289939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:25:00.886280 1289939 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:25:00.915262 1289939 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:25:00.915406 1289939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:25:01.008543 1289939 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-28 04:25:00.995667284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:25:01.008815 1289939 docker.go:295] overlay module found
	I1128 04:25:01.011982 1289939 out.go:177] * Using the docker driver based on user configuration
	I1128 04:25:01.013810 1289939 start.go:298] selected driver: docker
	I1128 04:25:01.013832 1289939 start.go:902] validating driver "docker" against <nil>
	I1128 04:25:01.013846 1289939 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:25:01.014573 1289939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:25:01.084320 1289939 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-11-28 04:25:01.074910919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:25:01.084505 1289939 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 04:25:01.084798 1289939 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:25:01.086636 1289939 out.go:177] * Using Docker driver with root privileges
	I1128 04:25:01.088573 1289939 cni.go:84] Creating CNI manager for ""
	I1128 04:25:01.088594 1289939 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:25:01.088613 1289939 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1128 04:25:01.088630 1289939 start_flags.go:323] config:
	{Name:ingress-addon-legacy-120112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-120112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:25:01.091910 1289939 out.go:177] * Starting control plane node ingress-addon-legacy-120112 in cluster ingress-addon-legacy-120112
	I1128 04:25:01.093758 1289939 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:25:01.095722 1289939 out.go:177] * Pulling base image ...
	I1128 04:25:01.097593 1289939 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1128 04:25:01.097690 1289939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:25:01.115991 1289939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1128 04:25:01.116016 1289939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1128 04:25:01.163579 1289939 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1128 04:25:01.163608 1289939 cache.go:56] Caching tarball of preloaded images
	I1128 04:25:01.163818 1289939 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1128 04:25:01.165983 1289939 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1128 04:25:01.167888 1289939 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:25:01.283672 1289939 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1128 04:25:10.680553 1289939 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:25:10.680682 1289939 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:25:11.870743 1289939 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1128 04:25:11.871171 1289939 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/config.json ...
	I1128 04:25:11.871206 1289939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/config.json: {Name:mk7b8dc154682609056a62059337ed9de1b8af24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:25:11.871383 1289939 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:25:11.871444 1289939 start.go:365] acquiring machines lock for ingress-addon-legacy-120112: {Name:mk960e6d4fb629404dc813851ae47b4544a9612b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:25:11.871505 1289939 start.go:369] acquired machines lock for "ingress-addon-legacy-120112" in 46.03µs
	I1128 04:25:11.871531 1289939 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-120112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-120112 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:25:11.871602 1289939 start.go:125] createHost starting for "" (driver="docker")
	I1128 04:25:11.874164 1289939 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1128 04:25:11.874387 1289939 start.go:159] libmachine.API.Create for "ingress-addon-legacy-120112" (driver="docker")
	I1128 04:25:11.874414 1289939 client.go:168] LocalClient.Create starting
	I1128 04:25:11.874523 1289939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem
	I1128 04:25:11.874562 1289939 main.go:141] libmachine: Decoding PEM data...
	I1128 04:25:11.874582 1289939 main.go:141] libmachine: Parsing certificate...
	I1128 04:25:11.874639 1289939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem
	I1128 04:25:11.874662 1289939 main.go:141] libmachine: Decoding PEM data...
	I1128 04:25:11.874675 1289939 main.go:141] libmachine: Parsing certificate...
	I1128 04:25:11.875028 1289939 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-120112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1128 04:25:11.893072 1289939 cli_runner.go:211] docker network inspect ingress-addon-legacy-120112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1128 04:25:11.893167 1289939 network_create.go:281] running [docker network inspect ingress-addon-legacy-120112] to gather additional debugging logs...
	I1128 04:25:11.893185 1289939 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-120112
	W1128 04:25:11.911355 1289939 cli_runner.go:211] docker network inspect ingress-addon-legacy-120112 returned with exit code 1
	I1128 04:25:11.911394 1289939 network_create.go:284] error running [docker network inspect ingress-addon-legacy-120112]: docker network inspect ingress-addon-legacy-120112: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-120112 not found
	I1128 04:25:11.911408 1289939 network_create.go:286] output of [docker network inspect ingress-addon-legacy-120112]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-120112 not found
	
	** /stderr **
	I1128 04:25:11.911536 1289939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:25:11.929341 1289939 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000fb70}
	I1128 04:25:11.929380 1289939 network_create.go:124] attempt to create docker network ingress-addon-legacy-120112 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1128 04:25:11.929451 1289939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-120112 ingress-addon-legacy-120112
	I1128 04:25:12.011553 1289939 network_create.go:108] docker network ingress-addon-legacy-120112 192.168.49.0/24 created
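
The inspect-then-create dance above (a non-zero exit from `docker network inspect` is the "network absent" signal, as network_create.go:281-286 confirms) reduces to two CLI calls. A sketch via os/exec with flag values copied from this log; `ensureNetwork` is an invented name and the minikube labels are omitted:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureNetwork treats a failing `docker network inspect` as "absent" and
    // creates the bridge network with the subnet/gateway/MTU seen in the log.
    func ensureNetwork(name, subnet, gateway string) error {
        if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
            return nil // already exists
        }
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet="+subnet,
            "--gateway="+gateway,
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            name).CombinedOutput()
        if err != nil {
            return fmt.Errorf("network create failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Values from this run.
        if err := ensureNetwork("ingress-addon-legacy-120112", "192.168.49.0/24", "192.168.49.1"); err != nil {
            fmt.Println(err)
        }
    }
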
	I1128 04:25:12.011596 1289939 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-120112" container
	I1128 04:25:12.011704 1289939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1128 04:25:12.029595 1289939 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-120112 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-120112 --label created_by.minikube.sigs.k8s.io=true
	I1128 04:25:12.049730 1289939 oci.go:103] Successfully created a docker volume ingress-addon-legacy-120112
	I1128 04:25:12.049828 1289939 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-120112-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-120112 --entrypoint /usr/bin/test -v ingress-addon-legacy-120112:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1128 04:25:13.581461 1289939 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-120112-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-120112 --entrypoint /usr/bin/test -v ingress-addon-legacy-120112:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (1.53157463s)
	I1128 04:25:13.581493 1289939 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-120112
	I1128 04:25:13.581521 1289939 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1128 04:25:13.581546 1289939 kic.go:194] Starting extracting preloaded images to volume ...
	I1128 04:25:13.581636 1289939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-120112:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1128 04:25:18.650649 1289939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-120112:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (5.068962113s)
	I1128 04:25:18.650681 1289939 kic.go:203] duration metric: took 5.069132 seconds to extract preloaded images to volume
	W1128 04:25:18.650825 1289939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1128 04:25:18.650936 1289939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1128 04:25:18.722227 1289939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-120112 --name ingress-addon-legacy-120112 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-120112 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-120112 --network ingress-addon-legacy-120112 --ip 192.168.49.2 --volume ingress-addon-legacy-120112:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 04:25:19.108790 1289939 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-120112 --format={{.State.Running}}
	I1128 04:25:19.137549 1289939 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-120112 --format={{.State.Status}}
	I1128 04:25:19.159947 1289939 cli_runner.go:164] Run: docker exec ingress-addon-legacy-120112 stat /var/lib/dpkg/alternatives/iptables
	I1128 04:25:19.235294 1289939 oci.go:144] the created container "ingress-addon-legacy-120112" has a running status.
	I1128 04:25:19.235322 1289939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa...
	I1128 04:25:19.909024 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1128 04:25:19.909071 1289939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1128 04:25:19.943654 1289939 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-120112 --format={{.State.Status}}
	I1128 04:25:19.975813 1289939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1128 04:25:19.975833 1289939 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-120112 chown docker:docker /home/docker/.ssh/authorized_keys]
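
kic.go:225 above creates the machine's SSH identity and installs the public half as /home/docker/.ssh/authorized_keys (381 bytes in this run). A sketch of producing those two artifacts, assuming golang.org/x/crypto/ssh for the authorized_keys wire format; this is illustrative, not minikube's key-generation code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private key, PEM-encoded (what lands in .../machines/<name>/id_rsa).
        pem.Encode(os.Stdout, &pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        // Public half in authorized_keys format (the file copied above).
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        os.Stdout.Write(ssh.MarshalAuthorizedKey(pub))
    }
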
	I1128 04:25:20.090204 1289939 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-120112 --format={{.State.Status}}
	I1128 04:25:20.121967 1289939 machine.go:88] provisioning docker machine ...
	I1128 04:25:20.121999 1289939 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-120112"
	I1128 04:25:20.122069 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:25:20.144442 1289939 main.go:141] libmachine: Using SSH client type: native
	I1128 04:25:20.145152 1289939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34339 <nil> <nil>}
	I1128 04:25:20.145179 1289939 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-120112 && echo "ingress-addon-legacy-120112" | sudo tee /etc/hostname
	I1128 04:25:20.308379 1289939 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-120112
	
	I1128 04:25:20.308498 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:25:20.331473 1289939 main.go:141] libmachine: Using SSH client type: native
	I1128 04:25:20.331975 1289939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34339 <nil> <nil>}
	I1128 04:25:20.331999 1289939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-120112' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-120112/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-120112' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:25:20.474708 1289939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:25:20.474732 1289939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:25:20.474762 1289939 ubuntu.go:177] setting up certificates
	I1128 04:25:20.474773 1289939 provision.go:83] configureAuth start
	I1128 04:25:20.474838 1289939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-120112
	I1128 04:25:20.493324 1289939 provision.go:138] copyHostCerts
	I1128 04:25:20.493370 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:25:20.493404 1289939 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:25:20.493416 1289939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:25:20.493499 1289939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:25:20.493594 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:25:20.493616 1289939 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:25:20.493621 1289939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:25:20.493653 1289939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:25:20.493697 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:25:20.493717 1289939 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:25:20.493724 1289939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:25:20.493751 1289939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:25:20.493799 1289939 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-120112 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-120112]
	I1128 04:25:20.921032 1289939 provision.go:172] copyRemoteCerts
	I1128 04:25:20.921104 1289939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:25:20.921154 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:25:20.938719 1289939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34339 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa Username:docker}
	I1128 04:25:21.036699 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 04:25:21.036770 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:25:21.068244 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 04:25:21.068316 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1128 04:25:21.099218 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 04:25:21.099284 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:25:21.129718 1289939 provision.go:86] duration metric: configureAuth took 654.928586ms
	I1128 04:25:21.129763 1289939 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:25:21.129983 1289939 config.go:182] Loaded profile config "ingress-addon-legacy-120112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1128 04:25:21.130121 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:25:21.148777 1289939 main.go:141] libmachine: Using SSH client type: native
	I1128 04:25:21.149196 1289939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34339 <nil> <nil>}
	I1128 04:25:21.149228 1289939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:25:21.439469 1289939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:25:21.439505 1289939 machine.go:91] provisioned docker machine in 1.317518294s
	I1128 04:25:21.439516 1289939 client.go:171] LocalClient.Create took 9.565092455s
	I1128 04:25:21.439530 1289939 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-120112" took 9.565142827s
	I1128 04:25:21.439538 1289939 start.go:300] post-start starting for "ingress-addon-legacy-120112" (driver="docker")
	I1128 04:25:21.439548 1289939 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:25:21.439624 1289939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:25:21.439683 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:25:21.458516 1289939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34339 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa Username:docker}
	I1128 04:25:21.556019 1289939 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:25:21.560549 1289939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:25:21.560587 1289939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:25:21.560617 1289939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:25:21.560630 1289939 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1128 04:25:21.560642 1289939 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:25:21.560734 1289939 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:25:21.560826 1289939 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:25:21.560840 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> /etc/ssl/certs/12614152.pem
	I1128 04:25:21.560954 1289939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:25:21.571947 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:25:21.600395 1289939 start.go:303] post-start completed in 160.8425ms
	I1128 04:25:21.600897 1289939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-120112
	I1128 04:25:21.621063 1289939 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/config.json ...
	I1128 04:25:21.621338 1289939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:25:21.621388 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:25:21.643262 1289939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34339 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa Username:docker}
	I1128 04:25:21.734802 1289939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:25:21.740852 1289939 start.go:128] duration metric: createHost completed in 9.869235237s
	I1128 04:25:21.740877 1289939 start.go:83] releasing machines lock for "ingress-addon-legacy-120112", held for 9.86935659s
	I1128 04:25:21.740948 1289939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-120112
	I1128 04:25:21.758539 1289939 ssh_runner.go:195] Run: cat /version.json
	I1128 04:25:21.758605 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:25:21.758543 1289939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:25:21.758705 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:25:21.777990 1289939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34339 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa Username:docker}
	I1128 04:25:21.779709 1289939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34339 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa Username:docker}
	I1128 04:25:22.009881 1289939 ssh_runner.go:195] Run: systemctl --version
	I1128 04:25:22.016307 1289939 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:25:22.162829 1289939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:25:22.168256 1289939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:25:22.192344 1289939 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:25:22.192493 1289939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:25:22.231791 1289939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1128 04:25:22.231856 1289939 start.go:472] detecting cgroup driver to use...
	I1128 04:25:22.231904 1289939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:25:22.231975 1289939 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:25:22.250690 1289939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:25:22.264087 1289939 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:25:22.264157 1289939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:25:22.280413 1289939 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:25:22.298660 1289939 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:25:22.407601 1289939 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:25:22.505002 1289939 docker.go:219] disabling docker service ...
	I1128 04:25:22.505078 1289939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:25:22.529147 1289939 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:25:22.543609 1289939 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:25:22.647717 1289939 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:25:22.759780 1289939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:25:22.774237 1289939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:25:22.797485 1289939 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1128 04:25:22.797557 1289939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:25:22.810533 1289939 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:25:22.810613 1289939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:25:22.823478 1289939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:25:22.836111 1289939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
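
Each sed invocation above is a whole-line rewrite keyed on one CRI-O directive (pause_image, then cgroup_manager, then conmon_cgroup). The same edit expressed in Go, for the cgroup_manager case; the path is from the log, but doing this in-process is only a sketch of the pattern (minikube actually runs sed over SSH):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
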
	I1128 04:25:22.848275 1289939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:25:22.859697 1289939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:25:22.869966 1289939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:25:22.880558 1289939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:25:22.978131 1289939 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:25:23.110526 1289939 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:25:23.110598 1289939 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:25:23.115886 1289939 start.go:540] Will wait 60s for crictl version
	I1128 04:25:23.115952 1289939 ssh_runner.go:195] Run: which crictl
	I1128 04:25:23.120524 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:25:23.163317 1289939 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1128 04:25:23.163415 1289939 ssh_runner.go:195] Run: crio --version
	I1128 04:25:23.208336 1289939 ssh_runner.go:195] Run: crio --version
	I1128 04:25:23.257349 1289939 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1128 04:25:23.259365 1289939 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-120112 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:25:23.277431 1289939 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1128 04:25:23.282259 1289939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:25:23.296239 1289939 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1128 04:25:23.296318 1289939 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:25:23.348130 1289939 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1128 04:25:23.348199 1289939 ssh_runner.go:195] Run: which lz4
	I1128 04:25:23.352906 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1128 04:25:23.353073 1289939 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 04:25:23.357566 1289939 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:25:23.357600 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1128 04:25:25.616470 1289939 crio.go:444] Took 2.263423 seconds to copy over tarball
	I1128 04:25:25.616598 1289939 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:25:28.305435 1289939 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.688774924s)
	I1128 04:25:28.305469 1289939 crio.go:451] Took 2.688918 seconds to extract the tarball
	I1128 04:25:28.305480 1289939 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 04:25:28.395089 1289939 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:25:28.436915 1289939 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1128 04:25:28.436942 1289939 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 04:25:28.436996 1289939 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:25:28.437218 1289939 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1128 04:25:28.437293 1289939 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 04:25:28.437369 1289939 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1128 04:25:28.437444 1289939 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1128 04:25:28.437515 1289939 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1128 04:25:28.437572 1289939 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1128 04:25:28.437600 1289939 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1128 04:25:28.438931 1289939 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1128 04:25:28.438987 1289939 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1128 04:25:28.439029 1289939 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 04:25:28.439221 1289939 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1128 04:25:28.439276 1289939 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1128 04:25:28.439350 1289939 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1128 04:25:28.438940 1289939 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:25:28.439465 1289939 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	W1128 04:25:28.774476 1289939 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1128 04:25:28.774655 1289939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1128 04:25:28.794763 1289939 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1128 04:25:28.795061 1289939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1128 04:25:28.802847 1289939 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1128 04:25:28.803108 1289939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 04:25:28.808635 1289939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1128 04:25:28.820297 1289939 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1128 04:25:28.820609 1289939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1128 04:25:28.832181 1289939 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1128 04:25:28.832461 1289939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1128 04:25:28.847671 1289939 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1128 04:25:28.847921 1289939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1128 04:25:28.861114 1289939 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1128 04:25:28.861177 1289939 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1128 04:25:28.861229 1289939 ssh_runner.go:195] Run: which crictl
	I1128 04:25:28.947444 1289939 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1128 04:25:28.947545 1289939 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1128 04:25:28.947627 1289939 ssh_runner.go:195] Run: which crictl
	W1128 04:25:28.949327 1289939 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1128 04:25:28.949560 1289939 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:25:28.956944 1289939 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1128 04:25:28.957029 1289939 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 04:25:28.957119 1289939 ssh_runner.go:195] Run: which crictl
	I1128 04:25:28.999232 1289939 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1128 04:25:28.999285 1289939 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1128 04:25:28.999335 1289939 ssh_runner.go:195] Run: which crictl
	I1128 04:25:29.040498 1289939 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1128 04:25:29.040538 1289939 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1128 04:25:29.040589 1289939 ssh_runner.go:195] Run: which crictl
	I1128 04:25:29.062566 1289939 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1128 04:25:29.062612 1289939 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1128 04:25:29.062667 1289939 ssh_runner.go:195] Run: which crictl
	I1128 04:25:29.062759 1289939 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1128 04:25:29.062780 1289939 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1128 04:25:29.062805 1289939 ssh_runner.go:195] Run: which crictl
	I1128 04:25:29.062885 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1128 04:25:29.062954 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1128 04:25:29.144598 1289939 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1128 04:25:29.145038 1289939 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:25:29.145076 1289939 ssh_runner.go:195] Run: which crictl
	I1128 04:25:29.144817 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1128 04:25:29.144853 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1128 04:25:29.144940 1289939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1128 04:25:29.144962 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1128 04:25:29.144983 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1128 04:25:29.145017 1289939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1128 04:25:29.145252 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 04:25:29.283740 1289939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1128 04:25:29.283826 1289939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1128 04:25:29.283850 1289939 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:25:29.283864 1289939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1128 04:25:29.283928 1289939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1128 04:25:29.288801 1289939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1128 04:25:29.349455 1289939 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1128 04:25:29.349537 1289939 cache_images.go:92] LoadImages completed in 912.581517ms
	W1128 04:25:29.349609 1289939 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I1128 04:25:29.349679 1289939 ssh_runner.go:195] Run: crio config
	I1128 04:25:29.405737 1289939 cni.go:84] Creating CNI manager for ""
	I1128 04:25:29.405760 1289939 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:25:29.405789 1289939 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:25:29.405809 1289939 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-120112 NodeName:ingress-addon-legacy-120112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1128 04:25:29.405955 1289939 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-120112"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:25:29.406032 1289939 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-120112 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-120112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:25:29.406104 1289939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1128 04:25:29.417081 1289939 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:25:29.417224 1289939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:25:29.428030 1289939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1128 04:25:29.449243 1289939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1128 04:25:29.470592 1289939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
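
The kubeadm.yaml shown above is rendered from a template and shipped as /var/tmp/minikube/kubeadm.yaml.new (2123 bytes here). A toy rendering of just the InitConfiguration stanza with text/template; the nodeCfg struct and its field names are invented for illustration, the values are this run's:

    package main

    import (
        "os"
        "text/template"
    )

    type nodeCfg struct {
        Name, NodeIP string
        BindPort     int
    }

    func main() {
        tmpl := template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: /var/run/crio/crio.sock
      name: "{{.Name}}"
    `))
        // Values from this run.
        _ = tmpl.Execute(os.Stdout, nodeCfg{
            Name:     "ingress-addon-legacy-120112",
            NodeIP:   "192.168.49.2",
            BindPort: 8443,
        })
    }
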
	I1128 04:25:29.492953 1289939 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1128 04:25:29.497433 1289939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:25:29.511341 1289939 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112 for IP: 192.168.49.2
	I1128 04:25:29.511381 1289939 certs.go:190] acquiring lock for shared ca certs: {Name:mka7cf71bac87c390cad9bb03b67c849db7103ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:25:29.511534 1289939 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key
	I1128 04:25:29.511589 1289939 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key
	I1128 04:25:29.511642 1289939 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.key
	I1128 04:25:29.511656 1289939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt with IP's: []
	I1128 04:25:29.682903 1289939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt ...
	I1128 04:25:29.682938 1289939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: {Name:mkc15bf7386e8ce4d4bd3a3d6b17510e40f19062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:25:29.683153 1289939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.key ...
	I1128 04:25:29.683174 1289939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.key: {Name:mk1aff5f8145d7afca5ac15d5969e7eff5b2b3e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:25:29.683262 1289939 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.key.dd3b5fb2
	I1128 04:25:29.683275 1289939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 04:25:30.493698 1289939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.crt.dd3b5fb2 ...
	I1128 04:25:30.493733 1289939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.crt.dd3b5fb2: {Name:mk02f09dce9389a53aad7602d5bba74b460821fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:25:30.493923 1289939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.key.dd3b5fb2 ...
	I1128 04:25:30.493937 1289939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.key.dd3b5fb2: {Name:mkb2152caeb90522df75a0bda907706c02850679 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:25:30.494025 1289939 certs.go:337] copying /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.crt
	I1128 04:25:30.494119 1289939 certs.go:341] copying /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.key
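
The apiserver pair above is generated with the IP SANs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] (crypto.go:68). minikube signs against its own CA key; this self-signed crypto/x509 sketch only demonstrates the SAN-embedding step, with the lifetime taken from the CertExpiration value in the config dump:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs logged for the apiserver cert:
            IPAddresses: []net.IP{
                net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
            DNSNames: []string{"localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
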
	I1128 04:25:30.494183 1289939 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.key
	I1128 04:25:30.494201 1289939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.crt with IP's: []
	I1128 04:25:30.800354 1289939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.crt ...
	I1128 04:25:30.800388 1289939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.crt: {Name:mk36a1e2974d2cc71df25ac1edd1a890e223b67d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:25:30.800578 1289939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.key ...
	I1128 04:25:30.800592 1289939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.key: {Name:mk87dca458c5a13a6f53e637676ec3f9e69e378f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:25:30.800696 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1128 04:25:30.800718 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1128 04:25:30.800734 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1128 04:25:30.800749 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1128 04:25:30.800765 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 04:25:30.800782 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 04:25:30.800795 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 04:25:30.800807 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 04:25:30.800866 1289939 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem (1338 bytes)
	W1128 04:25:30.800908 1289939 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415_empty.pem, impossibly tiny 0 bytes
	I1128 04:25:30.800923 1289939 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:25:30.800957 1289939 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem (1082 bytes)
	I1128 04:25:30.800984 1289939 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:25:30.801013 1289939 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem (1679 bytes)
	I1128 04:25:30.801084 1289939 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:25:30.801119 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem -> /usr/share/ca-certificates/1261415.pem
	I1128 04:25:30.801135 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> /usr/share/ca-certificates/12614152.pem
	I1128 04:25:30.801146 1289939 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:25:30.801793 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:25:30.831117 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:25:30.860029 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:25:30.889312 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 04:25:30.918418 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:25:30.946883 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:25:30.976057 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:25:31.007607 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1128 04:25:31.037648 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem --> /usr/share/ca-certificates/1261415.pem (1338 bytes)
	I1128 04:25:31.067418 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /usr/share/ca-certificates/12614152.pem (1708 bytes)
	I1128 04:25:31.097570 1289939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:25:31.128567 1289939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:25:31.151183 1289939 ssh_runner.go:195] Run: openssl version
	I1128 04:25:31.158794 1289939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:25:31.171054 1289939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:25:31.176243 1289939 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 04:13 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:25:31.176348 1289939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:25:31.185392 1289939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:25:31.197392 1289939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1261415.pem && ln -fs /usr/share/ca-certificates/1261415.pem /etc/ssl/certs/1261415.pem"
	I1128 04:25:31.209333 1289939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261415.pem
	I1128 04:25:31.214087 1289939 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 04:21 /usr/share/ca-certificates/1261415.pem
	I1128 04:25:31.214161 1289939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261415.pem
	I1128 04:25:31.223009 1289939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1261415.pem /etc/ssl/certs/51391683.0"
	I1128 04:25:31.235166 1289939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12614152.pem && ln -fs /usr/share/ca-certificates/12614152.pem /etc/ssl/certs/12614152.pem"
	I1128 04:25:31.247750 1289939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12614152.pem
	I1128 04:25:31.252795 1289939 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 04:21 /usr/share/ca-certificates/12614152.pem
	I1128 04:25:31.252918 1289939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12614152.pem
	I1128 04:25:31.261751 1289939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12614152.pem /etc/ssl/certs/3ec20f2e.0"
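
The openssl runs above build the standard CA hash-link layout: each PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs so OpenSSL-based clients can locate it. Below is a minimal Go sketch of the same technique — illustrative only, not minikube's implementation; it assumes openssl is on PATH and the paths are writable:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert mirrors the log above: ask openssl for the certificate's
	// subject hash, then symlink /etc/ssl/certs/<hash>.0 to the PEM.
	func linkCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // behave like `ln -fs`: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
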
	I1128 04:25:31.273757 1289939 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:25:31.278156 1289939 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 04:25:31.278247 1289939 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-120112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-120112 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:25:31.278332 1289939 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:25:31.278396 1289939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:25:31.320763 1289939 cri.go:89] found id: ""
	I1128 04:25:31.320836 1289939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:25:31.331939 1289939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:25:31.342974 1289939 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1128 04:25:31.343084 1289939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:25:31.353778 1289939 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:25:31.353826 1289939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1128 04:25:31.412957 1289939 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1128 04:25:31.413485 1289939 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:25:31.466012 1289939 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1128 04:25:31.466083 1289939 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1128 04:25:31.466121 1289939 kubeadm.go:322] OS: Linux
	I1128 04:25:31.466170 1289939 kubeadm.go:322] CGROUPS_CPU: enabled
	I1128 04:25:31.466220 1289939 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1128 04:25:31.466268 1289939 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1128 04:25:31.466316 1289939 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1128 04:25:31.466365 1289939 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1128 04:25:31.466422 1289939 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1128 04:25:31.558149 1289939 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:25:31.558259 1289939 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:25:31.558352 1289939 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:25:31.797820 1289939 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:25:31.799383 1289939 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:25:31.799670 1289939 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:25:31.905233 1289939 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:25:31.910332 1289939 out.go:204]   - Generating certificates and keys ...
	I1128 04:25:31.910481 1289939 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:25:31.910590 1289939 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:25:32.481941 1289939 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 04:25:33.087860 1289939 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 04:25:33.539752 1289939 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 04:25:34.175127 1289939 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 04:25:34.307466 1289939 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 04:25:34.307809 1289939 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-120112 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1128 04:25:35.091981 1289939 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 04:25:35.092398 1289939 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-120112 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1128 04:25:35.673314 1289939 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 04:25:36.197261 1289939 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 04:25:36.447092 1289939 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 04:25:36.447451 1289939 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:25:36.782650 1289939 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:25:36.976480 1289939 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:25:37.205050 1289939 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:25:38.405158 1289939 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:25:38.405805 1289939 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:25:38.408765 1289939 out.go:204]   - Booting up control plane ...
	I1128 04:25:38.408888 1289939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:25:38.416609 1289939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:25:38.416715 1289939 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:25:38.419116 1289939 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:25:38.423322 1289939 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:25:50.431576 1289939 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.008245 seconds
	I1128 04:25:50.431691 1289939 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:25:50.446604 1289939 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:25:50.969914 1289939 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:25:50.970057 1289939 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-120112 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:25:51.481969 1289939 kubeadm.go:322] [bootstrap-token] Using token: y7qrjp.spowz9wmixa09up4
	I1128 04:25:51.483709 1289939 out.go:204]   - Configuring RBAC rules ...
	I1128 04:25:51.483840 1289939 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:25:51.493641 1289939 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:25:51.501455 1289939 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:25:51.504567 1289939 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:25:51.507258 1289939 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:25:51.510711 1289939 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:25:51.520122 1289939 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:25:51.826633 1289939 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:25:51.913558 1289939 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:25:51.915097 1289939 kubeadm.go:322] 
	I1128 04:25:51.915167 1289939 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:25:51.915173 1289939 kubeadm.go:322] 
	I1128 04:25:51.915245 1289939 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:25:51.915250 1289939 kubeadm.go:322] 
	I1128 04:25:51.915274 1289939 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:25:51.915329 1289939 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:25:51.915377 1289939 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:25:51.915400 1289939 kubeadm.go:322] 
	I1128 04:25:51.915451 1289939 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:25:51.915520 1289939 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:25:51.915584 1289939 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:25:51.915591 1289939 kubeadm.go:322] 
	I1128 04:25:51.915669 1289939 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:25:51.915740 1289939 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:25:51.915744 1289939 kubeadm.go:322] 
	I1128 04:25:51.915822 1289939 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token y7qrjp.spowz9wmixa09up4 \
	I1128 04:25:51.915957 1289939 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c \
	I1128 04:25:51.915980 1289939 kubeadm.go:322]     --control-plane 
	I1128 04:25:51.915985 1289939 kubeadm.go:322] 
	I1128 04:25:51.916064 1289939 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:25:51.916068 1289939 kubeadm.go:322] 
	I1128 04:25:51.916419 1289939 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token y7qrjp.spowz9wmixa09up4 \
	I1128 04:25:51.916524 1289939 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c 
	I1128 04:25:51.918986 1289939 kubeadm.go:322] W1128 04:25:31.411842    1234 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1128 04:25:51.919198 1289939 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1128 04:25:51.919296 1289939 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:25:51.919415 1289939 kubeadm.go:322] W1128 04:25:38.413685    1234 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1128 04:25:51.919532 1289939 kubeadm.go:322] W1128 04:25:38.417586    1234 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1128 04:25:51.919553 1289939 cni.go:84] Creating CNI manager for ""
	I1128 04:25:51.919561 1289939 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:25:51.922209 1289939 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1128 04:25:51.924367 1289939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 04:25:51.929403 1289939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1128 04:25:51.929426 1289939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 04:25:51.951570 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 04:25:52.379135 1289939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:25:52.379321 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:52.379436 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=ingress-addon-legacy-120112 minikube.k8s.io/updated_at=2023_11_28T04_25_52_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:52.522900 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:52.522801 1289939 ops.go:34] apiserver oom_adj: -16
	I1128 04:25:52.622044 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:53.214328 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:53.713755 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:54.214429 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:54.714389 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:55.214108 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:55.713746 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:56.214245 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:56.713834 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:57.213840 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:57.714538 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:58.213715 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:58.714362 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:59.214468 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:25:59.713744 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:00.214561 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:00.714073 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:01.214262 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:01.713696 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:02.214562 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:02.713872 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:03.214182 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:03.714504 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:04.213781 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:04.714569 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:05.214111 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:05.714697 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:06.214667 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:06.713805 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:07.214561 1289939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:26:07.319032 1289939 kubeadm.go:1081] duration metric: took 14.939782508s to wait for elevateKubeSystemPrivileges.
	I1128 04:26:07.319063 1289939 kubeadm.go:406] StartCluster complete in 36.040832609s
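
The burst of identical `kubectl get sa default` commands between 04:25:52 and 04:26:07 is a fixed-interval poll: minikube retries roughly every 500ms until the default service account exists, then binds cluster-admin to it (the elevateKubeSystemPrivileges step timed above). A hedged sketch of that wait pattern — not the actual minikube code; the function name and timeout handling are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds
	// or the deadline passes, mirroring the ~500ms cadence in the log.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // the default service account now exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
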
	I1128 04:26:07.319081 1289939 settings.go:142] acquiring lock: {Name:mk51bec1305a61d1e5f21881e1d4b01dfafff7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:26:07.319141 1289939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:26:07.319914 1289939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/kubeconfig: {Name:mkdd24900acdf0a7a11c60f4e6d81c9963f4153d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:26:07.320848 1289939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:26:07.321114 1289939 config.go:182] Loaded profile config "ingress-addon-legacy-120112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1128 04:26:07.321284 1289939 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:26:07.321359 1289939 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-120112"
	I1128 04:26:07.321374 1289939 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-120112"
	I1128 04:26:07.321416 1289939 host.go:66] Checking if "ingress-addon-legacy-120112" exists ...
	I1128 04:26:07.321891 1289939 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-120112 --format={{.State.Status}}
	I1128 04:26:07.320633 1289939 kapi.go:59] client config for ingress-addon-legacy-120112: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:26:07.323098 1289939 cert_rotation.go:137] Starting client certificate rotation controller
	I1128 04:26:07.323486 1289939 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-120112"
	I1128 04:26:07.323509 1289939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-120112"
	I1128 04:26:07.323793 1289939 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-120112 --format={{.State.Status}}
	I1128 04:26:07.362518 1289939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:26:07.364803 1289939 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:26:07.364826 1289939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:26:07.364894 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:26:07.370588 1289939 kapi.go:59] client config for ingress-addon-legacy-120112: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:26:07.370926 1289939 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-120112"
	I1128 04:26:07.370982 1289939 host.go:66] Checking if "ingress-addon-legacy-120112" exists ...
	I1128 04:26:07.371519 1289939 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-120112 --format={{.State.Status}}
	I1128 04:26:07.406336 1289939 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-120112" context rescaled to 1 replicas
	I1128 04:26:07.406374 1289939 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:26:07.408594 1289939 out.go:177] * Verifying Kubernetes components...
	I1128 04:26:07.411028 1289939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:26:07.410937 1289939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34339 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa Username:docker}
	I1128 04:26:07.426681 1289939 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:26:07.426705 1289939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:26:07.426771 1289939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-120112
	I1128 04:26:07.471841 1289939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34339 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/ingress-addon-legacy-120112/id_rsa Username:docker}
	I1128 04:26:07.612349 1289939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:26:07.641245 1289939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:26:07.729721 1289939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:26:07.730490 1289939 kapi.go:59] client config for ingress-addon-legacy-120112: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:26:07.730809 1289939 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-120112" to be "Ready" ...
	I1128 04:26:08.145256 1289939 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1128 04:26:08.157274 1289939 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1128 04:26:08.158951 1289939 addons.go:502] enable addons completed in 837.657307ms: enabled=[storage-provisioner default-storageclass]
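
The long sed pipeline at 04:26:07.729721 edits the CoreDNS Corefile in place: it inserts a hosts{} block (mapping host.minikube.internal to the gateway IP) ahead of the forward plugin and a log directive after errors, which is what the "host record injected" line above confirms. A simplified Go sketch of just the hosts-block injection; the function name and sample Corefile are illustrative, not minikube's code:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} block immediately before the
	// `forward . /etc/resolv.conf` line, as the sed edit above does.
	func injectHostRecord(corefile, hostIP string) string {
		block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				b.WriteString(block)
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}
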
	I1128 04:26:09.999729 1289939 node_ready.go:58] node "ingress-addon-legacy-120112" has status "Ready":"False"
	I1128 04:26:12.499258 1289939 node_ready.go:58] node "ingress-addon-legacy-120112" has status "Ready":"False"
	I1128 04:26:14.499885 1289939 node_ready.go:58] node "ingress-addon-legacy-120112" has status "Ready":"False"
	I1128 04:26:16.998910 1289939 node_ready.go:58] node "ingress-addon-legacy-120112" has status "Ready":"False"
	I1128 04:26:19.499071 1289939 node_ready.go:58] node "ingress-addon-legacy-120112" has status "Ready":"False"
	I1128 04:26:21.499541 1289939 node_ready.go:58] node "ingress-addon-legacy-120112" has status "Ready":"False"
	I1128 04:26:23.998823 1289939 node_ready.go:58] node "ingress-addon-legacy-120112" has status "Ready":"False"
	I1128 04:26:25.499782 1289939 node_ready.go:49] node "ingress-addon-legacy-120112" has status "Ready":"True"
	I1128 04:26:25.499811 1289939 node_ready.go:38] duration metric: took 17.768971049s waiting for node "ingress-addon-legacy-120112" to be "Ready" ...
	I1128 04:26:25.499823 1289939 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:26:25.507484 1289939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-cqfdx" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:27.515619 1289939 pod_ready.go:102] pod "coredns-66bff467f8-cqfdx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-28 04:26:07 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1128 04:26:29.519309 1289939 pod_ready.go:102] pod "coredns-66bff467f8-cqfdx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:26:31.519654 1289939 pod_ready.go:102] pod "coredns-66bff467f8-cqfdx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:26:34.019673 1289939 pod_ready.go:102] pod "coredns-66bff467f8-cqfdx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:26:36.019804 1289939 pod_ready.go:102] pod "coredns-66bff467f8-cqfdx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:26:37.020062 1289939 pod_ready.go:92] pod "coredns-66bff467f8-cqfdx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:26:37.020110 1289939 pod_ready.go:81] duration metric: took 11.512588496s waiting for pod "coredns-66bff467f8-cqfdx" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.020132 1289939 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-120112" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.027318 1289939 pod_ready.go:92] pod "etcd-ingress-addon-legacy-120112" in "kube-system" namespace has status "Ready":"True"
	I1128 04:26:37.027346 1289939 pod_ready.go:81] duration metric: took 7.205616ms waiting for pod "etcd-ingress-addon-legacy-120112" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.027363 1289939 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-120112" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.033493 1289939 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-120112" in "kube-system" namespace has status "Ready":"True"
	I1128 04:26:37.033524 1289939 pod_ready.go:81] duration metric: took 6.151312ms waiting for pod "kube-apiserver-ingress-addon-legacy-120112" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.033538 1289939 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-120112" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.039108 1289939 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-120112" in "kube-system" namespace has status "Ready":"True"
	I1128 04:26:37.039141 1289939 pod_ready.go:81] duration metric: took 5.594622ms waiting for pod "kube-controller-manager-ingress-addon-legacy-120112" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.039155 1289939 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lrwpw" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.046778 1289939 pod_ready.go:92] pod "kube-proxy-lrwpw" in "kube-system" namespace has status "Ready":"True"
	I1128 04:26:37.046817 1289939 pod_ready.go:81] duration metric: took 7.641553ms waiting for pod "kube-proxy-lrwpw" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.046845 1289939 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-120112" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.214282 1289939 request.go:629] Waited for 167.319613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-120112
	I1128 04:26:37.414793 1289939 request.go:629] Waited for 196.519988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-120112
	I1128 04:26:37.417747 1289939 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-120112" in "kube-system" namespace has status "Ready":"True"
	I1128 04:26:37.417774 1289939 pod_ready.go:81] duration metric: took 370.920156ms waiting for pod "kube-scheduler-ingress-addon-legacy-120112" in "kube-system" namespace to be "Ready" ...
	I1128 04:26:37.417788 1289939 pod_ready.go:38] duration metric: took 11.91795014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
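
Each pod_ready.go wait above reduces to reading the pod's Ready condition from the API server. A compilable client-go fragment showing that predicate — a sketch that assumes an already-configured clientset, not minikube's actual helper:

	package readiness

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReady reports whether the named pod currently has the
	// condition Ready=True, the check behind the waits in the log.
	func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
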
	I1128 04:26:37.417805 1289939 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:26:37.417875 1289939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:26:37.431183 1289939 api_server.go:72] duration metric: took 30.024758568s to wait for apiserver process to appear ...
	I1128 04:26:37.431208 1289939 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:26:37.431226 1289939 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1128 04:26:37.440275 1289939 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1128 04:26:37.441508 1289939 api_server.go:141] control plane version: v1.18.20
	I1128 04:26:37.441534 1289939 api_server.go:131] duration metric: took 10.318101ms to wait for apiserver health ...
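
The healthz probe logged above is a plain HTTPS GET that expects status 200 with body `ok`. A standalone sketch of the same check; the hard-coded endpoint is this run's apiserver, and InsecureSkipVerify is an illustration-only shortcut where minikube actually verifies against its cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skipping TLS verification keeps the sketch self-contained;
		// a real client would load the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}
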
	I1128 04:26:37.441543 1289939 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:26:37.613950 1289939 request.go:629] Waited for 172.275846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1128 04:26:37.620296 1289939 system_pods.go:59] 8 kube-system pods found
	I1128 04:26:37.620338 1289939 system_pods.go:61] "coredns-66bff467f8-cqfdx" [f33edad0-810e-46ea-b725-6d11a38d29bb] Running
	I1128 04:26:37.620345 1289939 system_pods.go:61] "etcd-ingress-addon-legacy-120112" [e1355a7c-ad48-42e0-8c8f-cb4b553e31e2] Running
	I1128 04:26:37.620350 1289939 system_pods.go:61] "kindnet-kd4j5" [aaddbc0c-4503-46ff-88b2-024653dd7643] Running
	I1128 04:26:37.620355 1289939 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-120112" [09ef6cac-18dd-4104-96d5-8610687d9e9d] Running
	I1128 04:26:37.620387 1289939 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-120112" [99730462-27de-4b49-ac8a-9a0f29751c41] Running
	I1128 04:26:37.620399 1289939 system_pods.go:61] "kube-proxy-lrwpw" [d3662691-b2c1-4be6-8187-1b7ef958fec3] Running
	I1128 04:26:37.620404 1289939 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-120112" [f5e0236b-6c6e-40c8-af6b-7b41029c0153] Running
	I1128 04:26:37.620410 1289939 system_pods.go:61] "storage-provisioner" [d469f74d-4cfd-479b-8fc5-21ccc7399f3b] Running
	I1128 04:26:37.620416 1289939 system_pods.go:74] duration metric: took 178.867246ms to wait for pod list to return data ...
	I1128 04:26:37.620519 1289939 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:26:37.813887 1289939 request.go:629] Waited for 193.292353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1128 04:26:37.816565 1289939 default_sa.go:45] found service account: "default"
	I1128 04:26:37.816589 1289939 default_sa.go:55] duration metric: took 196.062383ms for default service account to be created ...
	I1128 04:26:37.816599 1289939 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:26:38.013959 1289939 request.go:629] Waited for 197.266429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1128 04:26:38.020491 1289939 system_pods.go:86] 8 kube-system pods found
	I1128 04:26:38.020523 1289939 system_pods.go:89] "coredns-66bff467f8-cqfdx" [f33edad0-810e-46ea-b725-6d11a38d29bb] Running
	I1128 04:26:38.020536 1289939 system_pods.go:89] "etcd-ingress-addon-legacy-120112" [e1355a7c-ad48-42e0-8c8f-cb4b553e31e2] Running
	I1128 04:26:38.020542 1289939 system_pods.go:89] "kindnet-kd4j5" [aaddbc0c-4503-46ff-88b2-024653dd7643] Running
	I1128 04:26:38.020570 1289939 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-120112" [09ef6cac-18dd-4104-96d5-8610687d9e9d] Running
	I1128 04:26:38.020584 1289939 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-120112" [99730462-27de-4b49-ac8a-9a0f29751c41] Running
	I1128 04:26:38.020589 1289939 system_pods.go:89] "kube-proxy-lrwpw" [d3662691-b2c1-4be6-8187-1b7ef958fec3] Running
	I1128 04:26:38.020596 1289939 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-120112" [f5e0236b-6c6e-40c8-af6b-7b41029c0153] Running
	I1128 04:26:38.020605 1289939 system_pods.go:89] "storage-provisioner" [d469f74d-4cfd-479b-8fc5-21ccc7399f3b] Running
	I1128 04:26:38.020613 1289939 system_pods.go:126] duration metric: took 204.007875ms to wait for k8s-apps to be running ...
	I1128 04:26:38.020624 1289939 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:26:38.020757 1289939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:26:38.038027 1289939 system_svc.go:56] duration metric: took 17.377158ms WaitForService to wait for kubelet.
	I1128 04:26:38.038064 1289939 kubeadm.go:581] duration metric: took 30.631664289s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:26:38.038085 1289939 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:26:38.214515 1289939 request.go:629] Waited for 176.342369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1128 04:26:38.217421 1289939 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:26:38.217452 1289939 node_conditions.go:123] node cpu capacity is 2
	I1128 04:26:38.217463 1289939 node_conditions.go:105] duration metric: took 179.371359ms to run NodePressure ...
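
The several `request.go:629` waits above come from client-go's client-side rate limiter (historically QPS 5, burst 10 by default), not from apiserver priority and fairness, as the messages themselves note. Tools that need more headroom raise the limits on rest.Config before building a clientset; a hedged sketch, with an illustrative kubeconfig path:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// The default limiter spaces out bursts of requests, producing
		// the "client-side throttling" waits seen in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
		fmt.Println("client configured with QPS", cfg.QPS, "and burst", cfg.Burst)
	}
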
	I1128 04:26:38.217476 1289939 start.go:228] waiting for startup goroutines ...
	I1128 04:26:38.217483 1289939 start.go:233] waiting for cluster config update ...
	I1128 04:26:38.217492 1289939 start.go:242] writing updated cluster config ...
	I1128 04:26:38.217809 1289939 ssh_runner.go:195] Run: rm -f paused
	I1128 04:26:38.278443 1289939 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1128 04:26:38.280822 1289939 out.go:177] 
	W1128 04:26:38.282615 1289939 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1128 04:26:38.284586 1289939 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1128 04:26:38.286578 1289939 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-120112" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 28 04:29:41 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:41.919404816Z" level=info msg="Removing container: 715cf19d697109a1ea50410feb7fdd1e0e3fc7865ca59c4c19b29c63b7e5200e" id=6bccfa86-b82d-4947-ab6c-01a5cb2189f7 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Nov 28 04:29:41 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:41.945292216Z" level=info msg="Removed container 715cf19d697109a1ea50410feb7fdd1e0e3fc7865ca59c4c19b29c63b7e5200e: default/hello-world-app-5f5d8b66bb-nz5q2/hello-world-app" id=6bccfa86-b82d-4947-ab6c-01a5cb2189f7 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Nov 28 04:29:41 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:41.959139226Z" level=info msg="Stopping pod sandbox: 757a95eda2fd4291dcde2cb26f7066213e3a17f605dde1cc1334d40706a7cf87" id=b10fd01b-c070-4ef4-9b8a-af01d347a9f9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:41 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:41.959188694Z" level=info msg="Stopped pod sandbox (already stopped): 757a95eda2fd4291dcde2cb26f7066213e3a17f605dde1cc1334d40706a7cf87" id=b10fd01b-c070-4ef4-9b8a-af01d347a9f9 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:42 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:42.959367173Z" level=info msg="Stopping container: 448e25ddf743aa3daff11b2fafb2c7f4f72ead26f1fcd0eb97f0d8a0e98294c7 (timeout: 2s)" id=ca54e56a-e8e4-4d24-853c-3ca9705da587 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 28 04:29:42 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:42.968246750Z" level=info msg="Stopping container: 448e25ddf743aa3daff11b2fafb2c7f4f72ead26f1fcd0eb97f0d8a0e98294c7 (timeout: 2s)" id=a6023c48-6ec5-4e7c-9640-05053f03dad1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 28 04:29:43 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:43.214717125Z" level=info msg="Stopping pod sandbox: 757a95eda2fd4291dcde2cb26f7066213e3a17f605dde1cc1334d40706a7cf87" id=a269a8cf-c8b9-4ea1-86f5-4221ae9ea91b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:43 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:43.214763032Z" level=info msg="Stopped pod sandbox (already stopped): 757a95eda2fd4291dcde2cb26f7066213e3a17f605dde1cc1334d40706a7cf87" id=a269a8cf-c8b9-4ea1-86f5-4221ae9ea91b name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:44 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:44.978891694Z" level=warning msg="Stopping container 448e25ddf743aa3daff11b2fafb2c7f4f72ead26f1fcd0eb97f0d8a0e98294c7 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=ca54e56a-e8e4-4d24-853c-3ca9705da587 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 28 04:29:45 ingress-addon-legacy-120112 conmon[2733]: conmon 448e25ddf743aa3daff1 <ninfo>: container 2744 exited with status 137
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.205183293Z" level=info msg="Stopped container 448e25ddf743aa3daff11b2fafb2c7f4f72ead26f1fcd0eb97f0d8a0e98294c7: ingress-nginx/ingress-nginx-controller-7fcf777cb7-72kkf/controller" id=a6023c48-6ec5-4e7c-9640-05053f03dad1 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.205935937Z" level=info msg="Stopping pod sandbox: da4e4c1c5c89e70873f441599daabf799a07f04bcb6510595ac6994fd695bb26" id=d0220422-a69e-4f99-862a-1b2a34e9d1dc name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.206391098Z" level=info msg="Stopped container 448e25ddf743aa3daff11b2fafb2c7f4f72ead26f1fcd0eb97f0d8a0e98294c7: ingress-nginx/ingress-nginx-controller-7fcf777cb7-72kkf/controller" id=ca54e56a-e8e4-4d24-853c-3ca9705da587 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.206992720Z" level=info msg="Stopping pod sandbox: da4e4c1c5c89e70873f441599daabf799a07f04bcb6510595ac6994fd695bb26" id=a85d8162-d6e6-4725-a424-de84397133f8 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.210806345Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-3O52DJSWQR4NZ6XA - [0:0]\n:KUBE-HP-ICZLS6BSMBFTYYAY - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-ICZLS6BSMBFTYYAY\n-X KUBE-HP-3O52DJSWQR4NZ6XA\nCOMMIT\n"
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.228700128Z" level=info msg="Closing host port tcp:80"
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.228763882Z" level=info msg="Closing host port tcp:443"
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.230469531Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.230510876Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.230713058Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-72kkf Namespace:ingress-nginx ID:da4e4c1c5c89e70873f441599daabf799a07f04bcb6510595ac6994fd695bb26 UID:3d14cdc1-9ae0-49df-b23f-84b47355a522 NetNS:/var/run/netns/b227c64d-39d8-4f15-ab8f-86493c3e726d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.230862275Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-72kkf from CNI network \"kindnet\" (type=ptp)"
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.276039315Z" level=info msg="Stopped pod sandbox: da4e4c1c5c89e70873f441599daabf799a07f04bcb6510595ac6994fd695bb26" id=d0220422-a69e-4f99-862a-1b2a34e9d1dc name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:45 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:45.276188279Z" level=info msg="Stopped pod sandbox (already stopped): da4e4c1c5c89e70873f441599daabf799a07f04bcb6510595ac6994fd695bb26" id=a85d8162-d6e6-4725-a424-de84397133f8 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:47 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:47.214868686Z" level=info msg="Stopping pod sandbox: da4e4c1c5c89e70873f441599daabf799a07f04bcb6510595ac6994fd695bb26" id=e63286ec-f790-4ed0-92a3-dcb00dab08ee name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Nov 28 04:29:47 ingress-addon-legacy-120112 crio[904]: time="2023-11-28 04:29:47.214914520Z" level=info msg="Stopped pod sandbox (already stopped): da4e4c1c5c89e70873f441599daabf799a07f04bcb6510595ac6994fd695bb26" id=e63286ec-f790-4ed0-92a3-dcb00dab08ee name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
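
The stop sequence above also explains the conmon line reporting exit status 137: crio logged a 2-second stop-signal timeout, then escalated to SIGKILL, and 137 is simply 128 plus the signal number. A quick check of that arithmetic in any shell with the kill builtin:

	$ kill -l $((137 - 128))
	KILL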
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	57ebf8f8fd8fa       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                   9 seconds ago       Exited              hello-world-app           2                   9c23b15ac2d3c       hello-world-app-5f5d8b66bb-nz5q2
	b9ac33713f1f8       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                    2 minutes ago       Running             nginx                     0                   2b29dd8a411ec       nginx
	448e25ddf743a       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   da4e4c1c5c89e       ingress-nginx-controller-7fcf777cb7-72kkf
	9c0f84c7baae8       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   66a699f88e80d       ingress-nginx-admission-patch-z8sqs
	b50d8ab3b044b       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   a2f6779612938       ingress-nginx-admission-create-9bkgt
	e43ee0613ed98       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   ce540fb93bd07       storage-provisioner
	43d5629bc068d       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   3776865eb4990       coredns-66bff467f8-cqfdx
	71c28ae6dbb75       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   f5362865d330e       kindnet-kd4j5
	3f50760d2f24a       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   304e7607d001e       kube-proxy-lrwpw
	8416bd98988a8       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   46ddc680f5b67       etcd-ingress-addon-legacy-120112
	e123332955dc3       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   2b6d0a8039a5d       kube-apiserver-ingress-addon-legacy-120112
	2c385cac2def0       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   42f8b10339c8c       kube-scheduler-ingress-addon-legacy-120112
	77d528e3b8230       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   b528d773eeb81       kube-controller-manager-ingress-addon-legacy-120112
	
	* 
	* ==> coredns [43d5629bc068dccdf107e9520e72bc0212a95fdbc347a0e2be42c54b9fe52102] <==
	* [INFO] 10.244.0.5:57921 - 4132 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002395016s
	[INFO] 10.244.0.5:53813 - 29520 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038826s
	[INFO] 10.244.0.5:53813 - 7840 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003055869s
	[INFO] 10.244.0.5:57921 - 21192 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00488215s
	[INFO] 10.244.0.5:57921 - 19440 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000249984s
	[INFO] 10.244.0.5:53813 - 45431 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001091563s
	[INFO] 10.244.0.5:53813 - 31614 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056681s
	[INFO] 10.244.0.5:37327 - 42085 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098937s
	[INFO] 10.244.0.5:58460 - 27934 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006459s
	[INFO] 10.244.0.5:58460 - 16804 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000045259s
	[INFO] 10.244.0.5:37327 - 44459 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000060907s
	[INFO] 10.244.0.5:37327 - 44099 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036676s
	[INFO] 10.244.0.5:58460 - 19719 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032435s
	[INFO] 10.244.0.5:37327 - 13941 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065583s
	[INFO] 10.244.0.5:58460 - 24672 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048689s
	[INFO] 10.244.0.5:37327 - 17617 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042322s
	[INFO] 10.244.0.5:58460 - 61256 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000175236s
	[INFO] 10.244.0.5:37327 - 61961 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032698s
	[INFO] 10.244.0.5:58460 - 14748 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030917s
	[INFO] 10.244.0.5:58460 - 14549 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001309803s
	[INFO] 10.244.0.5:37327 - 61642 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000969767s
	[INFO] 10.244.0.5:37327 - 30467 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000879618s
	[INFO] 10.244.0.5:58460 - 17553 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001009767s
	[INFO] 10.244.0.5:58460 - 22916 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003753s
	[INFO] 10.244.0.5:37327 - 61756 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000029021s
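
None of the CoreDNS queries above indicate a fault; this is ordinary ndots-driven search-domain expansion. The client resolves hello-world-app.default.svc.cluster.local without a trailing dot, and because the name has fewer than five dots, every suffix in the pod's search list is tried (each answered NXDOMAIN) before the unexpanded name finally returns NOERROR. A minimal sketch of a resolv.conf that would produce exactly this pattern -- the search list is read off the queries themselves, while the nameserver address is an assumed kube-dns ClusterIP that this report does not show:

	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10   # assumption: typical kube-dns service IP, not taken from this report
	options ndots:5         # names with fewer than 5 dots go through the search list first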
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-120112
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-120112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=ingress-addon-legacy-120112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_25_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:25:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-120112
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:29:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:27:25 +0000   Tue, 28 Nov 2023 04:25:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:27:25 +0000   Tue, 28 Nov 2023 04:25:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:27:25 +0000   Tue, 28 Nov 2023 04:25:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:27:25 +0000   Tue, 28 Nov 2023 04:26:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-120112
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 0950d7c0ea334691b460d37fc88236e4
	  System UUID:                786d0f5a-56b3-4841-9efe-46ab4b1097b7
	  Boot ID:                    29ce650a-e22a-4e0d-bffe-126490eafcf6
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-nz5q2                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-cqfdx                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m45s
	  kube-system                 etcd-ingress-addon-legacy-120112                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kindnet-kd4j5                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m44s
	  kube-system                 kube-apiserver-ingress-addon-legacy-120112             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-120112    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-lrwpw                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-120112             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m11s (x5 over 4m11s)  kubelet     Node ingress-addon-legacy-120112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x5 over 4m11s)  kubelet     Node ingress-addon-legacy-120112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x4 over 4m11s)  kubelet     Node ingress-addon-legacy-120112 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s                  kubelet     Node ingress-addon-legacy-120112 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s                  kubelet     Node ingress-addon-legacy-120112 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s                  kubelet     Node ingress-addon-legacy-120112 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m42s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m26s                  kubelet     Node ingress-addon-legacy-120112 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001139] FS-Cache: O-key=[8] '4f415c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008c3c2dae
	[  +0.001123] FS-Cache: N-key=[8] '4f415c0100000000'
	[  +0.003665] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=0000000000e844bc
	[  +0.001111] FS-Cache: O-key=[8] '4f415c0100000000'
	[  +0.000746] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=00000000dbb2bfbf
	[  +0.001088] FS-Cache: N-key=[8] '4f415c0100000000'
	[  +2.166969] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001027] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=000000001868d326
	[  +0.001142] FS-Cache: O-key=[8] '4e415c0100000000'
	[  +0.000817] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008c3c2dae
	[  +0.001108] FS-Cache: N-key=[8] '4e415c0100000000'
	[  +0.392945] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001131] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=000000007d358928
	[  +0.001121] FS-Cache: O-key=[8] '54415c0100000000'
	[  +0.000768] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000007d93f4ca
	[  +0.001181] FS-Cache: N-key=[8] '54415c0100000000'
	
	* 
	* ==> etcd [8416bd98988a8676d96805cc9809fa805d1bc704b0609cc389804dd53272b756] <==
	* raft2023/11/28 04:25:43 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/28 04:25:43 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-28 04:25:43.928833 W | auth: simple token is not cryptographically signed
	2023-11-28 04:25:43.969633 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-28 04:25:44.180811 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/28 04:25:44 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-28 04:25:44.182159 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-28 04:25:44.262025 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-28 04:25:44.278823 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-28 04:25:44.304769 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/28 04:25:44 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/28 04:25:44 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/28 04:25:44 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/28 04:25:44 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/28 04:25:44 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-28 04:25:44.412852 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-28 04:25:44.428809 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-28 04:25:44.435950 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-28 04:25:44.436040 I | etcdserver: published {Name:ingress-addon-legacy-120112 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-28 04:25:44.444718 I | embed: ready to serve client requests
	2023-11-28 04:25:44.452698 I | embed: ready to serve client requests
	2023-11-28 04:25:44.629450 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-28 04:25:44.632753 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-28 04:26:07.713004 W | etcdserver: request "header:<ID:8128025445779302275 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-kd4j5\" mod_revision:360 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-kd4j5\" value_size:3758 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-kd4j5\" > >>" with result "size:16" took too long (102.97505ms) to execute
	2023-11-28 04:26:07.876321 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-120112\" " with result "range_response_count:1 size:6504" took too long (128.701711ms) to execute
	
	* 
	* ==> kernel <==
	*  04:29:51 up  7:12,  0 users,  load average: 0.58, 1.14, 1.80
	Linux ingress-addon-legacy-120112 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [71c28ae6dbb75ffc3f92ad9d54494169f7c7b44e0a8077703cf0a9eefe3e8eed] <==
	* I1128 04:27:41.790034       1 main.go:227] handling current node
	I1128 04:27:51.794359       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:27:51.794389       1 main.go:227] handling current node
	I1128 04:28:01.801796       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:28:01.801829       1 main.go:227] handling current node
	I1128 04:28:11.805549       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:28:11.805737       1 main.go:227] handling current node
	I1128 04:28:21.817531       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:28:21.817560       1 main.go:227] handling current node
	I1128 04:28:31.821167       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:28:31.821199       1 main.go:227] handling current node
	I1128 04:28:41.827052       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:28:41.827084       1 main.go:227] handling current node
	I1128 04:28:51.831848       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:28:51.831881       1 main.go:227] handling current node
	I1128 04:29:01.842458       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:29:01.842488       1 main.go:227] handling current node
	I1128 04:29:11.852342       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:29:11.852376       1 main.go:227] handling current node
	I1128 04:29:21.863148       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:29:21.863183       1 main.go:227] handling current node
	I1128 04:29:31.869273       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:29:31.869302       1 main.go:227] handling current node
	I1128 04:29:41.877051       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1128 04:29:41.877081       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [e123332955dc3b37ea75a085384661faaa87c2184026a05072dd2ce0a9fece35] <==
	* I1128 04:25:48.885654       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1128 04:25:48.888365       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1128 04:25:48.888737       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1128 04:25:48.888789       1 cache.go:39] Caches are synced for autoregister controller
	I1128 04:25:48.895605       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1128 04:25:49.684388       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1128 04:25:49.684433       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1128 04:25:49.693786       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1128 04:25:49.697049       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1128 04:25:49.697138       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1128 04:25:50.203305       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 04:25:50.243830       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1128 04:25:50.381185       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1128 04:25:50.382229       1 controller.go:609] quota admission added evaluator for: endpoints
	I1128 04:25:50.386067       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1128 04:25:51.123287       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1128 04:25:51.782405       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1128 04:25:51.890253       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1128 04:25:55.135194       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 04:26:06.977795       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1128 04:26:07.491995       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1128 04:26:39.235395       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1128 04:27:04.888616       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1128 04:29:42.980077       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1128 04:29:43.837259       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [77d528e3b82307f7a538eb559ea06875aadd7c4f5156d4c95bef12610f7e786f] <==
	* I1128 04:26:07.418638       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"45778803-986b-4f51-8af9-b3de2cea1148", APIVersion:"apps/v1", ResourceVersion:"344", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1128 04:26:07.421832       1 shared_informer.go:230] Caches are synced for HPA 
	I1128 04:26:07.421975       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I1128 04:26:07.469191       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1128 04:26:07.520906       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1128 04:26:07.533131       1 shared_informer.go:230] Caches are synced for stateful set 
	I1128 04:26:07.541428       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1128 04:26:07.573473       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1128 04:26:07.591791       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e68536cc-dbad-4908-b089-b623c53ca680", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-lrwpw
	I1128 04:26:07.598966       1 shared_informer.go:230] Caches are synced for resource quota 
	I1128 04:26:07.620944       1 shared_informer.go:230] Caches are synced for resource quota 
	I1128 04:26:07.626241       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1128 04:26:07.637005       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1128 04:26:07.635047       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1128 04:26:07.724953       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"cae21b18-1f97-4957-8978-56bc303fed03", APIVersion:"apps/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-kd4j5
	I1128 04:26:07.725135       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"e8767af0-bc08-4ae8-ac0d-8a7c9e6d81ae", APIVersion:"apps/v1", ResourceVersion:"345", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-v9rw6
	I1128 04:26:27.273917       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1128 04:26:39.240396       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3152eba5-7333-45c5-a561-d980a0c74229", APIVersion:"apps/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-72kkf
	I1128 04:26:39.241006       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a0802c8c-ae18-47fe-a5f6-4af5512d89e7", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1128 04:26:39.268886       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"91ce5091-365a-4b06-bcdb-de563cc17666", APIVersion:"batch/v1", ResourceVersion:"487", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-9bkgt
	I1128 04:26:39.333065       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"8dfab62e-9738-4f12-9f21-8aeddb2f176e", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-z8sqs
	I1128 04:26:41.591663       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"91ce5091-365a-4b06-bcdb-de563cc17666", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1128 04:26:42.593800       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"8dfab62e-9738-4f12-9f21-8aeddb2f176e", APIVersion:"batch/v1", ResourceVersion:"512", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1128 04:29:23.746689       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"76a637f0-09fc-4d9c-a528-36b41d185b01", APIVersion:"apps/v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1128 04:29:23.764427       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"d9b369da-96ef-4216-ac38-6982999d18e8", APIVersion:"apps/v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-nz5q2
	
	* 
	* ==> kube-proxy [3f50760d2f24a43d3d697e25bcf49cfc7616c7f104ea79ac85327de14031c9da] <==
	* W1128 04:26:09.899679       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1128 04:26:09.951365       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1128 04:26:09.951492       1 server_others.go:186] Using iptables Proxier.
	I1128 04:26:09.951860       1 server.go:583] Version: v1.18.20
	I1128 04:26:09.952999       1 config.go:315] Starting service config controller
	I1128 04:26:09.953112       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1128 04:26:09.953243       1 config.go:133] Starting endpoints config controller
	I1128 04:26:09.953280       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1128 04:26:10.056756       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1128 04:26:10.056756       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [2c385cac2def0aed6503759173760687934c74745b77093b166d7a4042fee76f] <==
	* W1128 04:25:48.861514       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 04:25:48.887753       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1128 04:25:48.887848       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1128 04:25:48.890893       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 04:25:48.890999       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 04:25:48.892105       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1128 04:25:48.892237       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	E1128 04:25:48.902264       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 04:25:48.902761       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:25:48.902902       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:25:48.903001       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:25:48.903100       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:25:48.903220       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:25:48.903311       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 04:25:48.903407       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:25:48.903501       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:25:48.903592       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:25:48.903688       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 04:25:48.903780       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 04:25:49.844068       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:25:49.872850       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:25:49.974958       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1128 04:25:50.491172       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1128 04:26:07.036712       1 factory.go:503] pod: kube-system/coredns-66bff467f8-v9rw6 is already present in the active queue
	E1128 04:26:07.064755       1 factory.go:503] pod: kube-system/coredns-66bff467f8-cqfdx is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Nov 28 04:29:27 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:27.888226    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a27e2b2cef0362e6b34bac451a580836efd2602877484eb89d3c7f6af7a70e20
	Nov 28 04:29:27 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:27.888458    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 715cf19d697109a1ea50410feb7fdd1e0e3fc7865ca59c4c19b29c63b7e5200e
	Nov 28 04:29:27 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:27.888724    1618 pod_workers.go:191] Error syncing pod 703f0ebf-57c2-46e4-a96c-3dfd1725913f ("hello-world-app-5f5d8b66bb-nz5q2_default(703f0ebf-57c2-46e4-a96c-3dfd1725913f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nz5q2_default(703f0ebf-57c2-46e4-a96c-3dfd1725913f)"
	Nov 28 04:29:28 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:28.890951    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 715cf19d697109a1ea50410feb7fdd1e0e3fc7865ca59c4c19b29c63b7e5200e
	Nov 28 04:29:28 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:28.891212    1618 pod_workers.go:191] Error syncing pod 703f0ebf-57c2-46e4-a96c-3dfd1725913f ("hello-world-app-5f5d8b66bb-nz5q2_default(703f0ebf-57c2-46e4-a96c-3dfd1725913f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nz5q2_default(703f0ebf-57c2-46e4-a96c-3dfd1725913f)"
	Nov 28 04:29:35 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:35.216085    1618 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 28 04:29:35 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:35.216130    1618 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 28 04:29:35 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:35.216196    1618 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 28 04:29:35 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:35.216232    1618 pod_workers.go:191] Error syncing pod 2b3e72a9-8a31-49a6-90cd-7073f9419f43 ("kube-ingress-dns-minikube_kube-system(2b3e72a9-8a31-49a6-90cd-7073f9419f43)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 28 04:29:39 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:39.841799    1618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-hpvlt" (UniqueName: "kubernetes.io/secret/2b3e72a9-8a31-49a6-90cd-7073f9419f43-minikube-ingress-dns-token-hpvlt") pod "2b3e72a9-8a31-49a6-90cd-7073f9419f43" (UID: "2b3e72a9-8a31-49a6-90cd-7073f9419f43")
	Nov 28 04:29:39 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:39.846040    1618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3e72a9-8a31-49a6-90cd-7073f9419f43-minikube-ingress-dns-token-hpvlt" (OuterVolumeSpecName: "minikube-ingress-dns-token-hpvlt") pod "2b3e72a9-8a31-49a6-90cd-7073f9419f43" (UID: "2b3e72a9-8a31-49a6-90cd-7073f9419f43"). InnerVolumeSpecName "minikube-ingress-dns-token-hpvlt". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 28 04:29:39 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:39.942152    1618 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-hpvlt" (UniqueName: "kubernetes.io/secret/2b3e72a9-8a31-49a6-90cd-7073f9419f43-minikube-ingress-dns-token-hpvlt") on node "ingress-addon-legacy-120112" DevicePath ""
	Nov 28 04:29:41 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:41.214730    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 715cf19d697109a1ea50410feb7fdd1e0e3fc7865ca59c4c19b29c63b7e5200e
	Nov 28 04:29:41 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:41.917317    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 715cf19d697109a1ea50410feb7fdd1e0e3fc7865ca59c4c19b29c63b7e5200e
	Nov 28 04:29:41 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:41.917548    1618 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 57ebf8f8fd8fa2f4db7ea742513b1f88675861b9cef8af23b16271889318d6e9
	Nov 28 04:29:41 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:41.917792    1618 pod_workers.go:191] Error syncing pod 703f0ebf-57c2-46e4-a96c-3dfd1725913f ("hello-world-app-5f5d8b66bb-nz5q2_default(703f0ebf-57c2-46e4-a96c-3dfd1725913f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-nz5q2_default(703f0ebf-57c2-46e4-a96c-3dfd1725913f)"
	Nov 28 04:29:42 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:42.961257    1618 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-72kkf.179baf1398667d6e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-72kkf", UID:"3d14cdc1-9ae0-49df-b23f-84b47355a522", APIVersion:"v1", ResourceVersion:"488", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-120112"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc151790db922e16e, ext:231226078964, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc151790db922e16e, ext:231226078964, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-72kkf.179baf1398667d6e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 28 04:29:42 ingress-addon-legacy-120112 kubelet[1618]: E1128 04:29:42.976923    1618 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-72kkf.179baf1398667d6e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-72kkf", UID:"3d14cdc1-9ae0-49df-b23f-84b47355a522", APIVersion:"v1", ResourceVersion:"488", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-120112"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc151790db922e16e, ext:231226078964, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc151790db9adbe39, ext:231235179455, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-72kkf.179baf1398667d6e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 28 04:29:45 ingress-addon-legacy-120112 kubelet[1618]: W1128 04:29:45.928310    1618 pod_container_deletor.go:77] Container "da4e4c1c5c89e70873f441599daabf799a07f04bcb6510595ac6994fd695bb26" not found in pod's containers
	Nov 28 04:29:47 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:47.163983    1618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/3d14cdc1-9ae0-49df-b23f-84b47355a522-webhook-cert") pod "3d14cdc1-9ae0-49df-b23f-84b47355a522" (UID: "3d14cdc1-9ae0-49df-b23f-84b47355a522")
	Nov 28 04:29:47 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:47.164060    1618 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-f6dbp" (UniqueName: "kubernetes.io/secret/3d14cdc1-9ae0-49df-b23f-84b47355a522-ingress-nginx-token-f6dbp") pod "3d14cdc1-9ae0-49df-b23f-84b47355a522" (UID: "3d14cdc1-9ae0-49df-b23f-84b47355a522")
	Nov 28 04:29:47 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:47.170955    1618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d14cdc1-9ae0-49df-b23f-84b47355a522-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "3d14cdc1-9ae0-49df-b23f-84b47355a522" (UID: "3d14cdc1-9ae0-49df-b23f-84b47355a522"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 28 04:29:47 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:47.171191    1618 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d14cdc1-9ae0-49df-b23f-84b47355a522-ingress-nginx-token-f6dbp" (OuterVolumeSpecName: "ingress-nginx-token-f6dbp") pod "3d14cdc1-9ae0-49df-b23f-84b47355a522" (UID: "3d14cdc1-9ae0-49df-b23f-84b47355a522"). InnerVolumeSpecName "ingress-nginx-token-f6dbp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 28 04:29:47 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:47.264419    1618 reconciler.go:319] Volume detached for volume "ingress-nginx-token-f6dbp" (UniqueName: "kubernetes.io/secret/3d14cdc1-9ae0-49df-b23f-84b47355a522-ingress-nginx-token-f6dbp") on node "ingress-addon-legacy-120112" DevicePath ""
	Nov 28 04:29:47 ingress-addon-legacy-120112 kubelet[1618]: I1128 04:29:47.264474    1618 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/3d14cdc1-9ae0-49df-b23f-84b47355a522-webhook-cert") on node "ingress-addon-legacy-120112" DevicePath ""
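
One concrete failure is visible in the kubelet entries above: CRI-O rejects the kube-ingress-dns-minikube image because cryptexlabs/minikube-ingress-dns is a short name (no registry host) and the node's /etc/containers/registries.conf defines no unqualified-search registries. A minimal sketch of the setting that would let short names resolve, assuming Docker Hub is the intended registry (an assumption; the report does not say):

	# /etc/containers/registries.conf, v2 format -- sketch, see containers-registries.conf(5)
	unqualified-search-registries = ["docker.io"]   # assumed registry for the short name

Fully qualifying the image reference in the addon manifest (prefixing it with docker.io/) would sidestep the search list entirely and is the less ambiguous fix.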
	
	* 
	* ==> storage-provisioner [e43ee0613ed983295669400cc9423aef0928fb539438909a0d9a2744db28c6e1] <==
	* I1128 04:26:32.030089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 04:26:32.043969       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 04:26:32.044151       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 04:26:32.051815       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 04:26:32.052105       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-120112_ad6b492c-ff48-4f87-9cff-212c1b8f4785!
	I1128 04:26:32.053370       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"703beabd-bd96-42ab-b610-963559e4af94", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-120112_ad6b492c-ff48-4f87-9cff-212c1b8f4785 became leader
	I1128 04:26:32.153174       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-120112_ad6b492c-ff48-4f87-9cff-212c1b8f4785!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-120112 -n ingress-addon-legacy-120112
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-120112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (180.36s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (4.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-9h4s8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-9h4s8 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-9h4s8 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (249.421853ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-9h4s8): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-cpvdq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-cpvdq -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-cpvdq -- sh -c "ping -c 1 192.168.58.1": exit status 1 (287.498237ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-cpvdq): exit status 1
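
Both pods print the PING header and then fail with "permission denied (are you root?)": busybox ping cannot open a raw ICMP socket because, unlike Docker's default profile, CRI-O does not grant the NET_RAW capability to containers by default. A minimal sketch of a pod that adds the capability back -- the pod name and image tag below are illustrative, not taken from the test:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: ping-check              # hypothetical helper pod
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.36         # assumed tag
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]        # lets ping open a raw ICMP socket without root

	$ kubectl exec ping-check -- ping -c 1 192.168.58.1

On newer kernels, widening net.ipv4.ping_group_range is an alternative, but it only helps ping implementations that fall back to unprivileged ICMP datagram sockets.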
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-448128
helpers_test.go:235: (dbg) docker inspect multinode-448128:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188",
	        "Created": "2023-11-28T04:36:28.298290696Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1326803,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T04:36:28.623471858Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/hostname",
	        "HostsPath": "/var/lib/docker/containers/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/hosts",
	        "LogPath": "/var/lib/docker/containers/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188-json.log",
	        "Name": "/multinode-448128",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-448128:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-448128",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fab0ee750ba52bd6c0c722a69796f1986aabfd3a871bc02e1a48855bd7f17100-init/diff:/var/lib/docker/overlay2/cc610f7b23c869d03809246385f10f80b89207e6d90717a6a4867696f2289751/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fab0ee750ba52bd6c0c722a69796f1986aabfd3a871bc02e1a48855bd7f17100/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fab0ee750ba52bd6c0c722a69796f1986aabfd3a871bc02e1a48855bd7f17100/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fab0ee750ba52bd6c0c722a69796f1986aabfd3a871bc02e1a48855bd7f17100/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-448128",
	                "Source": "/var/lib/docker/volumes/multinode-448128/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-448128",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-448128",
	                "name.minikube.sigs.k8s.io": "multinode-448128",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c5bbc7a483dcc7f7d6a223b550aee3798a8cdf8d6d1f00d4fba361642df4609a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34399"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34398"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34395"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34397"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34396"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c5bbc7a483dc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-448128": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "883175574ae5",
	                        "multinode-448128"
	                    ],
	                    "NetworkID": "0d78a22dd54636e701489524e1885d93224348757b28c5d00a5642a1dd2db686",
	                    "EndpointID": "433fc147cd4f6c172a2b648fe6f7cb5563d3ad5edd703b4b81b0551963579492",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
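Reading the inspect output above: the Networks entry for multinode-448128 shows Gateway 192.168.58.1, which is exactly the address the pods tried to ping, so host.minikube.internal resolved correctly and the failure is a socket-permission problem inside the pods rather than a routing one. As a quick sanity check (standard docker CLI templating, not part of the test run), the gateway can be read back directly:

	docker network inspect multinode-448128 --format '{{(index .IPAM.Config 0).Gateway}}'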
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-448128 -n multinode-448128
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-448128 logs -n 25: (1.621628927s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-935086                           | mount-start-2-935086 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-935086 ssh -- ls                    | mount-start-2-935086 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-933428                           | mount-start-1-933428 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-935086 ssh -- ls                    | mount-start-2-935086 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-935086                           | mount-start-2-935086 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	| start   | -p mount-start-2-935086                           | mount-start-2-935086 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	| ssh     | mount-start-2-935086 ssh -- ls                    | mount-start-2-935086 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-935086                           | mount-start-2-935086 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	| delete  | -p mount-start-1-933428                           | mount-start-1-933428 | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:36 UTC |
	| start   | -p multinode-448128                               | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:36 UTC | 28 Nov 23 04:38 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- apply -f                   | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- rollout                    | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- get pods -o                | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- get pods -o                | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | busybox-5bc68d56bd-9h4s8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | busybox-5bc68d56bd-cpvdq --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | busybox-5bc68d56bd-9h4s8 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | busybox-5bc68d56bd-cpvdq --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | busybox-5bc68d56bd-9h4s8 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | busybox-5bc68d56bd-cpvdq -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- get pods -o                | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | busybox-5bc68d56bd-9h4s8                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC |                     |
	|         | busybox-5bc68d56bd-9h4s8 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC | 28 Nov 23 04:38 UTC |
	|         | busybox-5bc68d56bd-cpvdq                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-448128 -- exec                       | multinode-448128     | jenkins | v1.32.0 | 28 Nov 23 04:38 UTC |                     |
	|         | busybox-5bc68d56bd-cpvdq -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:36:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:36:22.846585 1326355 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:36:22.846781 1326355 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:36:22.846809 1326355 out.go:309] Setting ErrFile to fd 2...
	I1128 04:36:22.846830 1326355 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:36:22.847126 1326355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:36:22.847584 1326355 out.go:303] Setting JSON to false
	I1128 04:36:22.848696 1326355 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":26318,"bootTime":1701119865,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:36:22.848806 1326355 start.go:138] virtualization:  
	I1128 04:36:22.851374 1326355 out.go:177] * [multinode-448128] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:36:22.853672 1326355 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:36:22.855812 1326355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:36:22.853849 1326355 notify.go:220] Checking for updates...
	I1128 04:36:22.857868 1326355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:36:22.859633 1326355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:36:22.861194 1326355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:36:22.862788 1326355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:36:22.864794 1326355 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:36:22.889812 1326355 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:36:22.889935 1326355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:36:22.977750 1326355 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-28 04:36:22.96727886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:36:22.977855 1326355 docker.go:295] overlay module found
	I1128 04:36:22.980227 1326355 out.go:177] * Using the docker driver based on user configuration
	I1128 04:36:22.982148 1326355 start.go:298] selected driver: docker
	I1128 04:36:22.982179 1326355 start.go:902] validating driver "docker" against <nil>
	I1128 04:36:22.982194 1326355 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:36:22.982877 1326355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:36:23.051003 1326355 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-28 04:36:23.041554428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:36:23.051173 1326355 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 04:36:23.051425 1326355 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:36:23.053245 1326355 out.go:177] * Using Docker driver with root privileges
	I1128 04:36:23.054951 1326355 cni.go:84] Creating CNI manager for ""
	I1128 04:36:23.054973 1326355 cni.go:136] 0 nodes found, recommending kindnet
	I1128 04:36:23.054984 1326355 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1128 04:36:23.054999 1326355 start_flags.go:323] config:
	{Name:multinode-448128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-448128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:36:23.057285 1326355 out.go:177] * Starting control plane node multinode-448128 in cluster multinode-448128
	I1128 04:36:23.058898 1326355 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:36:23.060781 1326355 out.go:177] * Pulling base image ...
	I1128 04:36:23.062477 1326355 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:36:23.062519 1326355 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:36:23.062566 1326355 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1128 04:36:23.062575 1326355 cache.go:56] Caching tarball of preloaded images
	I1128 04:36:23.062697 1326355 preload.go:174] Found /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1128 04:36:23.062707 1326355 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:36:23.063064 1326355 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/config.json ...
	I1128 04:36:23.063091 1326355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/config.json: {Name:mke0ce54b05ac143d1b8362bc127df4499a1fb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:36:23.081054 1326355 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1128 04:36:23.081082 1326355 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1128 04:36:23.081096 1326355 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:36:23.081138 1326355 start.go:365] acquiring machines lock for multinode-448128: {Name:mkccf54598b1b7c6edac41746bdd63a75506cf3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:36:23.081261 1326355 start.go:369] acquired machines lock for "multinode-448128" in 100.89µs
	I1128 04:36:23.081293 1326355 start.go:93] Provisioning new machine with config: &{Name:multinode-448128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-448128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:36:23.081375 1326355 start.go:125] createHost starting for "" (driver="docker")
	I1128 04:36:23.083998 1326355 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1128 04:36:23.084288 1326355 start.go:159] libmachine.API.Create for "multinode-448128" (driver="docker")
	I1128 04:36:23.084321 1326355 client.go:168] LocalClient.Create starting
	I1128 04:36:23.084416 1326355 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem
	I1128 04:36:23.084457 1326355 main.go:141] libmachine: Decoding PEM data...
	I1128 04:36:23.084479 1326355 main.go:141] libmachine: Parsing certificate...
	I1128 04:36:23.084537 1326355 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem
	I1128 04:36:23.084561 1326355 main.go:141] libmachine: Decoding PEM data...
	I1128 04:36:23.084576 1326355 main.go:141] libmachine: Parsing certificate...
	I1128 04:36:23.085028 1326355 cli_runner.go:164] Run: docker network inspect multinode-448128 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1128 04:36:23.102604 1326355 cli_runner.go:211] docker network inspect multinode-448128 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1128 04:36:23.102686 1326355 network_create.go:281] running [docker network inspect multinode-448128] to gather additional debugging logs...
	I1128 04:36:23.102708 1326355 cli_runner.go:164] Run: docker network inspect multinode-448128
	W1128 04:36:23.120481 1326355 cli_runner.go:211] docker network inspect multinode-448128 returned with exit code 1
	I1128 04:36:23.120516 1326355 network_create.go:284] error running [docker network inspect multinode-448128]: docker network inspect multinode-448128: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-448128 not found
	I1128 04:36:23.120529 1326355 network_create.go:286] output of [docker network inspect multinode-448128]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-448128 not found
	
	** /stderr **
	I1128 04:36:23.120637 1326355 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:36:23.142162 1326355 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-457410d7183c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:60:a5:a2:7c} reservation:<nil>}
	I1128 04:36:23.142504 1326355 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400256af30}
	I1128 04:36:23.142529 1326355 network_create.go:124] attempt to create docker network multinode-448128 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1128 04:36:23.142599 1326355 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-448128 multinode-448128
	I1128 04:36:23.214072 1326355 network_create.go:108] docker network multinode-448128 192.168.58.0/24 created
	I1128 04:36:23.214118 1326355 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-448128" container
	I1128 04:36:23.214192 1326355 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1128 04:36:23.231072 1326355 cli_runner.go:164] Run: docker volume create multinode-448128 --label name.minikube.sigs.k8s.io=multinode-448128 --label created_by.minikube.sigs.k8s.io=true
	I1128 04:36:23.250540 1326355 oci.go:103] Successfully created a docker volume multinode-448128
	I1128 04:36:23.250623 1326355 cli_runner.go:164] Run: docker run --rm --name multinode-448128-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-448128 --entrypoint /usr/bin/test -v multinode-448128:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1128 04:36:23.862216 1326355 oci.go:107] Successfully prepared a docker volume multinode-448128
	I1128 04:36:23.862277 1326355 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:36:23.862298 1326355 kic.go:194] Starting extracting preloaded images to volume ...
	I1128 04:36:23.862391 1326355 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-448128:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1128 04:36:28.200484 1326355 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-448128:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (4.338052151s)
	I1128 04:36:28.200521 1326355 kic.go:203] duration metric: took 4.338220 seconds to extract preloaded images to volume
	W1128 04:36:28.200700 1326355 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1128 04:36:28.200833 1326355 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1128 04:36:28.280971 1326355 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-448128 --name multinode-448128 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-448128 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-448128 --network multinode-448128 --ip 192.168.58.2 --volume multinode-448128:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 04:36:28.630972 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Running}}
	I1128 04:36:28.660697 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Status}}
	I1128 04:36:28.684243 1326355 cli_runner.go:164] Run: docker exec multinode-448128 stat /var/lib/dpkg/alternatives/iptables
	I1128 04:36:28.773088 1326355 oci.go:144] the created container "multinode-448128" has a running status.
	I1128 04:36:28.773128 1326355 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa...
	I1128 04:36:29.982450 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1128 04:36:29.982499 1326355 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1128 04:36:30.008344 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Status}}
	I1128 04:36:30.037648 1326355 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1128 04:36:30.037675 1326355 kic_runner.go:114] Args: [docker exec --privileged multinode-448128 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1128 04:36:30.116569 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Status}}
	I1128 04:36:30.145222 1326355 machine.go:88] provisioning docker machine ...
	I1128 04:36:30.145259 1326355 ubuntu.go:169] provisioning hostname "multinode-448128"
	I1128 04:36:30.145340 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:36:30.170396 1326355 main.go:141] libmachine: Using SSH client type: native
	I1128 04:36:30.170888 1326355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34399 <nil> <nil>}
	I1128 04:36:30.170904 1326355 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-448128 && echo "multinode-448128" | sudo tee /etc/hostname
	I1128 04:36:30.323669 1326355 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-448128
	
	I1128 04:36:30.323774 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:36:30.342989 1326355 main.go:141] libmachine: Using SSH client type: native
	I1128 04:36:30.343421 1326355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34399 <nil> <nil>}
	I1128 04:36:30.343445 1326355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-448128' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-448128/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-448128' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:36:30.474128 1326355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:36:30.474152 1326355 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:36:30.474182 1326355 ubuntu.go:177] setting up certificates
	I1128 04:36:30.474193 1326355 provision.go:83] configureAuth start
	I1128 04:36:30.474266 1326355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-448128
	I1128 04:36:30.492745 1326355 provision.go:138] copyHostCerts
	I1128 04:36:30.492794 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:36:30.492828 1326355 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:36:30.492835 1326355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:36:30.492910 1326355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:36:30.492993 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:36:30.493010 1326355 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:36:30.493014 1326355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:36:30.493041 1326355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:36:30.493079 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:36:30.493094 1326355 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:36:30.493098 1326355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:36:30.493120 1326355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:36:30.493161 1326355 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.multinode-448128 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-448128]
	I1128 04:36:30.972470 1326355 provision.go:172] copyRemoteCerts
	I1128 04:36:30.972538 1326355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:36:30.972588 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:36:30.990829 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:36:31.087674 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 04:36:31.087755 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:36:31.118532 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 04:36:31.118601 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1128 04:36:31.148640 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 04:36:31.148795 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 04:36:31.178238 1326355 provision.go:86] duration metric: configureAuth took 704.030704ms
	I1128 04:36:31.178268 1326355 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:36:31.178507 1326355 config.go:182] Loaded profile config "multinode-448128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:36:31.178631 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:36:31.197381 1326355 main.go:141] libmachine: Using SSH client type: native
	I1128 04:36:31.197846 1326355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34399 <nil> <nil>}
	I1128 04:36:31.197872 1326355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:36:31.440675 1326355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:36:31.440777 1326355 machine.go:91] provisioned docker machine in 1.295530296s
	I1128 04:36:31.440795 1326355 client.go:171] LocalClient.Create took 8.356464627s
	I1128 04:36:31.440814 1326355 start.go:167] duration metric: libmachine.API.Create for "multinode-448128" took 8.356527298s
	I1128 04:36:31.440821 1326355 start.go:300] post-start starting for "multinode-448128" (driver="docker")
	I1128 04:36:31.440834 1326355 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:36:31.440907 1326355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:36:31.440949 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:36:31.460060 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:36:31.555743 1326355 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:36:31.560062 1326355 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1128 04:36:31.560084 1326355 command_runner.go:130] > NAME="Ubuntu"
	I1128 04:36:31.560092 1326355 command_runner.go:130] > VERSION_ID="22.04"
	I1128 04:36:31.560099 1326355 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1128 04:36:31.560105 1326355 command_runner.go:130] > VERSION_CODENAME=jammy
	I1128 04:36:31.560110 1326355 command_runner.go:130] > ID=ubuntu
	I1128 04:36:31.560131 1326355 command_runner.go:130] > ID_LIKE=debian
	I1128 04:36:31.560137 1326355 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1128 04:36:31.560144 1326355 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1128 04:36:31.560151 1326355 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1128 04:36:31.560162 1326355 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1128 04:36:31.560168 1326355 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1128 04:36:31.560227 1326355 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:36:31.560251 1326355 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:36:31.560262 1326355 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:36:31.560270 1326355 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1128 04:36:31.560280 1326355 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:36:31.560341 1326355 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:36:31.560422 1326355 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:36:31.560429 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> /etc/ssl/certs/12614152.pem
	I1128 04:36:31.560527 1326355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:36:31.571289 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:36:31.600325 1326355 start.go:303] post-start completed in 159.486794ms
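The post-start phase above just materializes the directory tree and syncs local assets from .minikube/files over SSH. A sketch of reproducing the cert sync by hand against this node (the key path, port 34399, and the `docker` user are the ones printed by sshutil above; the /tmp staging step is an assumption, since non-root scp cannot write /etc directly):

	ssh -p 34399 -i /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa \
	    docker@127.0.0.1 'sudo mkdir -p /etc/ssl/certs'
	scp -P 34399 -i /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa \
	    /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem \
	    docker@127.0.0.1:/tmp/12614152.pem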
	I1128 04:36:31.600780 1326355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-448128
	I1128 04:36:31.618554 1326355 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/config.json ...
	I1128 04:36:31.618861 1326355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:36:31.618926 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:36:31.636863 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:36:31.731297 1326355 command_runner.go:130] > 18%
	I1128 04:36:31.731387 1326355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:36:31.737662 1326355 command_runner.go:130] > 160G
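The two one-liners above are minikube's disk-pressure check on /var: column 5 of `df -h` is the used percentage and column 4 of `df -BG` is the free space in GiB. Both can be rerun directly on the node:

	df -h /var | awk 'NR==2{print $5}'    # used percentage; 18% here
	df -BG /var | awk 'NR==2{print $4}'   # free space in GiB; 160G here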
	I1128 04:36:31.737692 1326355 start.go:128] duration metric: createHost completed in 8.656307769s
	I1128 04:36:31.737703 1326355 start.go:83] releasing machines lock for "multinode-448128", held for 8.656428367s
	I1128 04:36:31.737823 1326355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-448128
	I1128 04:36:31.756882 1326355 ssh_runner.go:195] Run: cat /version.json
	I1128 04:36:31.756944 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:36:31.757199 1326355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:36:31.757260 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:36:31.775795 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:36:31.788148 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:36:31.991794 1326355 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 04:36:31.995058 1326355 command_runner.go:130] > {"iso_version": "v1.32.1-1699648094-17581", "kicbase_version": "v0.0.42-1700142204-17634", "minikube_version": "v1.32.0", "commit": "6532cab52e164d1138ecb8469e77a57a00b45825"}
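Right after releasing the machine lock, minikube checks outbound registry connectivity and reads the base image's build metadata; both are plain commands you can rerun over SSH:

	curl -sS -m 2 https://registry.k8s.io/    # expects the 'Temporary Redirect' body above
	cat /version.json                         # iso/kicbase/minikube versions baked into the image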
	I1128 04:36:31.995200 1326355 ssh_runner.go:195] Run: systemctl --version
	I1128 04:36:32.000653 1326355 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1128 04:36:32.000765 1326355 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1128 04:36:32.001175 1326355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:36:32.150135 1326355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:36:32.155443 1326355 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1128 04:36:32.155479 1326355 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1128 04:36:32.155487 1326355 command_runner.go:130] > Device: 3ah/58d	Inode: 5449282     Links: 1
	I1128 04:36:32.155496 1326355 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 04:36:32.155503 1326355 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1128 04:36:32.155513 1326355 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1128 04:36:32.155519 1326355 command_runner.go:130] > Change: 2023-11-28 04:13:24.237847244 +0000
	I1128 04:36:32.155526 1326355 command_runner.go:130] >  Birth: 2023-11-28 04:13:24.237847244 +0000
	I1128 04:36:32.155775 1326355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:36:32.180353 1326355 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:36:32.180526 1326355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:36:32.223518 1326355 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1128 04:36:32.223605 1326355 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
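The two `find` invocations above neutralize pre-existing CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, so they can be restored later. The same commands, with the globs quoted so the shell does not expand them before `find` sees them:

	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	    -not -name '*.mk_disabled' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;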
	I1128 04:36:32.223636 1326355 start.go:472] detecting cgroup driver to use...
	I1128 04:36:32.223696 1326355 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:36:32.223773 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:36:32.244799 1326355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:36:32.258310 1326355 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:36:32.258421 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:36:32.275832 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:36:32.293519 1326355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:36:32.392438 1326355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:36:32.409750 1326355 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1128 04:36:32.498210 1326355 docker.go:219] disabling docker service ...
	I1128 04:36:32.498320 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:36:32.520487 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:36:32.534924 1326355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:36:32.628768 1326355 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1128 04:36:32.628838 1326355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:36:32.733327 1326355 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1128 04:36:32.733405 1326355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:36:32.746567 1326355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:36:32.765163 1326355 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
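crictl finds the runtime through /etc/crictl.yaml, which the step above writes. An equivalent one-liner producing the same file content:

	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml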
	I1128 04:36:32.767437 1326355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:36:32.767501 1326355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:36:32.779279 1326355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:36:32.779361 1326355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:36:32.791634 1326355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:36:32.803839 1326355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
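The four `sed` edits above are the whole of minikube's CRI-O reconfiguration: pin the pause image, switch the cgroup manager to cgroupfs, and force conmon into the pod cgroup. Collected in one place for reference (the CONF variable is just a local shorthand):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                         # drop any stale setting
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"  # re-add it after cgroup_manager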
	I1128 04:36:32.816158 1326355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:36:32.827512 1326355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:36:32.836771 1326355 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1128 04:36:32.838175 1326355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
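Both kernel settings above must hold before kube-proxy and the CNI can program iptables for bridged pod traffic; a quick sanity check on any node:

	sudo sysctl net.bridge.bridge-nf-call-iptables       # should report '= 1'
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'  # enable IPv4 forwarding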
	I1128 04:36:32.848646 1326355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:36:32.953311 1326355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:36:33.086929 1326355 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:36:33.087049 1326355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:36:33.092301 1326355 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 04:36:33.092337 1326355 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 04:36:33.092353 1326355 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1128 04:36:33.092363 1326355 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 04:36:33.092372 1326355 command_runner.go:130] > Access: 2023-11-28 04:36:33.070612858 +0000
	I1128 04:36:33.092382 1326355 command_runner.go:130] > Modify: 2023-11-28 04:36:33.070612858 +0000
	I1128 04:36:33.092388 1326355 command_runner.go:130] > Change: 2023-11-28 04:36:33.070612858 +0000
	I1128 04:36:33.092398 1326355 command_runner.go:130] >  Birth: -
	I1128 04:36:33.092416 1326355 start.go:540] Will wait 60s for crictl version
	I1128 04:36:33.092488 1326355 ssh_runner.go:195] Run: which crictl
	I1128 04:36:33.097456 1326355 command_runner.go:130] > /usr/bin/crictl
	I1128 04:36:33.097552 1326355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:36:33.140075 1326355 command_runner.go:130] > Version:  0.1.0
	I1128 04:36:33.140319 1326355 command_runner.go:130] > RuntimeName:  cri-o
	I1128 04:36:33.140524 1326355 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1128 04:36:33.140736 1326355 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 04:36:33.143520 1326355 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
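Once the restart completes, the readiness probe above boils down to two commands; `crictl version` reporting `RuntimeApiVersion: v1` is what lets minikube proceed:

	stat /var/run/crio/crio.sock    # the socket appears once crio finishes restarting
	sudo /usr/bin/crictl version    # RuntimeName / RuntimeVersion / RuntimeApiVersion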
	I1128 04:36:33.143610 1326355 ssh_runner.go:195] Run: crio --version
	I1128 04:36:33.190208 1326355 command_runner.go:130] > crio version 1.24.6
	I1128 04:36:33.190230 1326355 command_runner.go:130] > Version:          1.24.6
	I1128 04:36:33.190239 1326355 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1128 04:36:33.190245 1326355 command_runner.go:130] > GitTreeState:     clean
	I1128 04:36:33.190252 1326355 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1128 04:36:33.190258 1326355 command_runner.go:130] > GoVersion:        go1.18.2
	I1128 04:36:33.190263 1326355 command_runner.go:130] > Compiler:         gc
	I1128 04:36:33.190268 1326355 command_runner.go:130] > Platform:         linux/arm64
	I1128 04:36:33.190274 1326355 command_runner.go:130] > Linkmode:         dynamic
	I1128 04:36:33.190288 1326355 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 04:36:33.190300 1326355 command_runner.go:130] > SeccompEnabled:   true
	I1128 04:36:33.190305 1326355 command_runner.go:130] > AppArmorEnabled:  false
	I1128 04:36:33.192510 1326355 ssh_runner.go:195] Run: crio --version
	I1128 04:36:33.241180 1326355 command_runner.go:130] > crio version 1.24.6
	I1128 04:36:33.241244 1326355 command_runner.go:130] > Version:          1.24.6
	I1128 04:36:33.241259 1326355 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1128 04:36:33.241266 1326355 command_runner.go:130] > GitTreeState:     clean
	I1128 04:36:33.241276 1326355 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1128 04:36:33.241282 1326355 command_runner.go:130] > GoVersion:        go1.18.2
	I1128 04:36:33.241287 1326355 command_runner.go:130] > Compiler:         gc
	I1128 04:36:33.241293 1326355 command_runner.go:130] > Platform:         linux/arm64
	I1128 04:36:33.241306 1326355 command_runner.go:130] > Linkmode:         dynamic
	I1128 04:36:33.241320 1326355 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 04:36:33.241330 1326355 command_runner.go:130] > SeccompEnabled:   true
	I1128 04:36:33.241338 1326355 command_runner.go:130] > AppArmorEnabled:  false
	I1128 04:36:33.244847 1326355 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1128 04:36:33.246688 1326355 cli_runner.go:164] Run: docker network inspect multinode-448128 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:36:33.264240 1326355 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1128 04:36:33.268616 1326355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
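The /etc/hosts rewrite above is idempotent: minikube first greps for the exact entry and only rewrites the file when it is missing, filtering any old `host.minikube.internal` line out before appending the new one. A slightly simplified sketch of the same pattern:

	if ! grep -q 'host.minikube.internal' /etc/hosts; then
	    { grep -v 'host.minikube.internal' /etc/hosts
	      printf '192.168.58.1\thost.minikube.internal\n'
	    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts
	fi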
	I1128 04:36:33.281871 1326355 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:36:33.281942 1326355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:36:33.348252 1326355 command_runner.go:130] > {
	I1128 04:36:33.348270 1326355 command_runner.go:130] >   "images": [
	I1128 04:36:33.348275 1326355 command_runner.go:130] >     {
	I1128 04:36:33.348285 1326355 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1128 04:36:33.348291 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.348303 1326355 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1128 04:36:33.348308 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348314 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.348326 1326355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1128 04:36:33.348335 1326355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1128 04:36:33.348340 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348345 1326355 command_runner.go:130] >       "size": "60867618",
	I1128 04:36:33.348350 1326355 command_runner.go:130] >       "uid": null,
	I1128 04:36:33.348355 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.348365 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.348372 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.348403 1326355 command_runner.go:130] >     },
	I1128 04:36:33.348407 1326355 command_runner.go:130] >     {
	I1128 04:36:33.348415 1326355 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1128 04:36:33.348420 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.348426 1326355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1128 04:36:33.348431 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348436 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.348445 1326355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1128 04:36:33.348455 1326355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1128 04:36:33.348459 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348467 1326355 command_runner.go:130] >       "size": "29037500",
	I1128 04:36:33.348472 1326355 command_runner.go:130] >       "uid": null,
	I1128 04:36:33.348477 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.348482 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.348487 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.348491 1326355 command_runner.go:130] >     },
	I1128 04:36:33.348496 1326355 command_runner.go:130] >     {
	I1128 04:36:33.348505 1326355 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1128 04:36:33.348512 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.348518 1326355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1128 04:36:33.348523 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348528 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.348537 1326355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1128 04:36:33.348546 1326355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1128 04:36:33.348551 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348556 1326355 command_runner.go:130] >       "size": "51393451",
	I1128 04:36:33.348561 1326355 command_runner.go:130] >       "uid": null,
	I1128 04:36:33.348566 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.348570 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.348577 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.348581 1326355 command_runner.go:130] >     },
	I1128 04:36:33.348585 1326355 command_runner.go:130] >     {
	I1128 04:36:33.348593 1326355 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1128 04:36:33.348598 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.348604 1326355 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1128 04:36:33.348610 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348615 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.348624 1326355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1128 04:36:33.348633 1326355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1128 04:36:33.348648 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348654 1326355 command_runner.go:130] >       "size": "182203183",
	I1128 04:36:33.348682 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.348687 1326355 command_runner.go:130] >         "value": "0"
	I1128 04:36:33.348692 1326355 command_runner.go:130] >       },
	I1128 04:36:33.348697 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.348701 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.348706 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.348711 1326355 command_runner.go:130] >     },
	I1128 04:36:33.348715 1326355 command_runner.go:130] >     {
	I1128 04:36:33.348722 1326355 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1128 04:36:33.348727 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.348733 1326355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1128 04:36:33.348738 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348744 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.348753 1326355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1128 04:36:33.348762 1326355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1128 04:36:33.348767 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348772 1326355 command_runner.go:130] >       "size": "121119694",
	I1128 04:36:33.348776 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.348781 1326355 command_runner.go:130] >         "value": "0"
	I1128 04:36:33.348785 1326355 command_runner.go:130] >       },
	I1128 04:36:33.348791 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.348796 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.348802 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.348806 1326355 command_runner.go:130] >     },
	I1128 04:36:33.348811 1326355 command_runner.go:130] >     {
	I1128 04:36:33.348818 1326355 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1128 04:36:33.348823 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.348830 1326355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1128 04:36:33.348835 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348840 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.348852 1326355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1128 04:36:33.348861 1326355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1128 04:36:33.348866 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348872 1326355 command_runner.go:130] >       "size": "117252916",
	I1128 04:36:33.348876 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.348881 1326355 command_runner.go:130] >         "value": "0"
	I1128 04:36:33.348885 1326355 command_runner.go:130] >       },
	I1128 04:36:33.348890 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.348895 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.348900 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.348904 1326355 command_runner.go:130] >     },
	I1128 04:36:33.348908 1326355 command_runner.go:130] >     {
	I1128 04:36:33.348916 1326355 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1128 04:36:33.348920 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.348927 1326355 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1128 04:36:33.348931 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348936 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.348945 1326355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1128 04:36:33.348955 1326355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1128 04:36:33.348960 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.348965 1326355 command_runner.go:130] >       "size": "69992343",
	I1128 04:36:33.348969 1326355 command_runner.go:130] >       "uid": null,
	I1128 04:36:33.348974 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.348979 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.348984 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.348988 1326355 command_runner.go:130] >     },
	I1128 04:36:33.348992 1326355 command_runner.go:130] >     {
	I1128 04:36:33.349000 1326355 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1128 04:36:33.349004 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.349010 1326355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1128 04:36:33.349015 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.349019 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.349059 1326355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1128 04:36:33.349069 1326355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1128 04:36:33.349073 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.349078 1326355 command_runner.go:130] >       "size": "59253556",
	I1128 04:36:33.349085 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.349090 1326355 command_runner.go:130] >         "value": "0"
	I1128 04:36:33.349095 1326355 command_runner.go:130] >       },
	I1128 04:36:33.349100 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.349104 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.349109 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.349113 1326355 command_runner.go:130] >     },
	I1128 04:36:33.349117 1326355 command_runner.go:130] >     {
	I1128 04:36:33.349125 1326355 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1128 04:36:33.349130 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.349135 1326355 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1128 04:36:33.349140 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.349144 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.349153 1326355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1128 04:36:33.349163 1326355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1128 04:36:33.349167 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.349172 1326355 command_runner.go:130] >       "size": "520014",
	I1128 04:36:33.349177 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.349184 1326355 command_runner.go:130] >         "value": "65535"
	I1128 04:36:33.349188 1326355 command_runner.go:130] >       },
	I1128 04:36:33.349193 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.349198 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.349203 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.349207 1326355 command_runner.go:130] >     }
	I1128 04:36:33.349211 1326355 command_runner.go:130] >   ]
	I1128 04:36:33.349215 1326355 command_runner.go:130] > }
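The JSON above is what crio.go walks to conclude that every image needed for Kubernetes v1.28.4 is already preloaded. For eyeballing the same data, piping through jq is handy (assumes jq is installed on the node):

	sudo crictl images --output json | jq -r '.images[] | "\(.repoTags[0])\t\(.size)"'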
	I1128 04:36:33.350696 1326355 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:36:33.350714 1326355 crio.go:415] Images already preloaded, skipping extraction
	I1128 04:36:33.350771 1326355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:36:33.390852 1326355 command_runner.go:130] > {
	I1128 04:36:33.390870 1326355 command_runner.go:130] >   "images": [
	I1128 04:36:33.390875 1326355 command_runner.go:130] >     {
	I1128 04:36:33.390885 1326355 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1128 04:36:33.390891 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.390898 1326355 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1128 04:36:33.390903 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.390909 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.390919 1326355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1128 04:36:33.390929 1326355 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1128 04:36:33.390933 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.390939 1326355 command_runner.go:130] >       "size": "60867618",
	I1128 04:36:33.390943 1326355 command_runner.go:130] >       "uid": null,
	I1128 04:36:33.390949 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.390963 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.390968 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.390975 1326355 command_runner.go:130] >     },
	I1128 04:36:33.390980 1326355 command_runner.go:130] >     {
	I1128 04:36:33.390988 1326355 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1128 04:36:33.390993 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.390999 1326355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1128 04:36:33.391004 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391009 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.391018 1326355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1128 04:36:33.391028 1326355 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1128 04:36:33.391032 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391040 1326355 command_runner.go:130] >       "size": "29037500",
	I1128 04:36:33.391045 1326355 command_runner.go:130] >       "uid": null,
	I1128 04:36:33.391050 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.391056 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.391061 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.391065 1326355 command_runner.go:130] >     },
	I1128 04:36:33.391069 1326355 command_runner.go:130] >     {
	I1128 04:36:33.391077 1326355 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1128 04:36:33.391082 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.391090 1326355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1128 04:36:33.391095 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391100 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.391109 1326355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1128 04:36:33.391118 1326355 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1128 04:36:33.391123 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391128 1326355 command_runner.go:130] >       "size": "51393451",
	I1128 04:36:33.391133 1326355 command_runner.go:130] >       "uid": null,
	I1128 04:36:33.391138 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.391143 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.391149 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.391153 1326355 command_runner.go:130] >     },
	I1128 04:36:33.391157 1326355 command_runner.go:130] >     {
	I1128 04:36:33.391165 1326355 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1128 04:36:33.391169 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.391175 1326355 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1128 04:36:33.391180 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391185 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.391195 1326355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1128 04:36:33.391204 1326355 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1128 04:36:33.391213 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391218 1326355 command_runner.go:130] >       "size": "182203183",
	I1128 04:36:33.391223 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.391228 1326355 command_runner.go:130] >         "value": "0"
	I1128 04:36:33.391232 1326355 command_runner.go:130] >       },
	I1128 04:36:33.391237 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.391242 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.391247 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.391251 1326355 command_runner.go:130] >     },
	I1128 04:36:33.391255 1326355 command_runner.go:130] >     {
	I1128 04:36:33.391262 1326355 command_runner.go:130] >       "id": "04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419",
	I1128 04:36:33.391267 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.391274 1326355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1128 04:36:33.391278 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391283 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.391292 1326355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb",
	I1128 04:36:33.391303 1326355 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"
	I1128 04:36:33.391308 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391313 1326355 command_runner.go:130] >       "size": "121119694",
	I1128 04:36:33.391318 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.391322 1326355 command_runner.go:130] >         "value": "0"
	I1128 04:36:33.391327 1326355 command_runner.go:130] >       },
	I1128 04:36:33.391332 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.391337 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.391341 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.391346 1326355 command_runner.go:130] >     },
	I1128 04:36:33.391350 1326355 command_runner.go:130] >     {
	I1128 04:36:33.391357 1326355 command_runner.go:130] >       "id": "9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b",
	I1128 04:36:33.391362 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.391369 1326355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1128 04:36:33.391373 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391378 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.391387 1326355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1128 04:36:33.391399 1326355 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"
	I1128 04:36:33.391406 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391412 1326355 command_runner.go:130] >       "size": "117252916",
	I1128 04:36:33.391416 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.391421 1326355 command_runner.go:130] >         "value": "0"
	I1128 04:36:33.391425 1326355 command_runner.go:130] >       },
	I1128 04:36:33.391430 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.391435 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.391440 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.391444 1326355 command_runner.go:130] >     },
	I1128 04:36:33.391448 1326355 command_runner.go:130] >     {
	I1128 04:36:33.391456 1326355 command_runner.go:130] >       "id": "3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39",
	I1128 04:36:33.391461 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.391467 1326355 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1128 04:36:33.391471 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391476 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.391485 1326355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68",
	I1128 04:36:33.391494 1326355 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1128 04:36:33.391498 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391511 1326355 command_runner.go:130] >       "size": "69992343",
	I1128 04:36:33.391516 1326355 command_runner.go:130] >       "uid": null,
	I1128 04:36:33.391521 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.391526 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.391531 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.391536 1326355 command_runner.go:130] >     },
	I1128 04:36:33.391540 1326355 command_runner.go:130] >     {
	I1128 04:36:33.391548 1326355 command_runner.go:130] >       "id": "05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54",
	I1128 04:36:33.391553 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.391559 1326355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1128 04:36:33.391564 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391569 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.391600 1326355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1128 04:36:33.391610 1326355 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"
	I1128 04:36:33.391614 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391619 1326355 command_runner.go:130] >       "size": "59253556",
	I1128 04:36:33.391624 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.391628 1326355 command_runner.go:130] >         "value": "0"
	I1128 04:36:33.391634 1326355 command_runner.go:130] >       },
	I1128 04:36:33.391639 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.391643 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.391648 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.391652 1326355 command_runner.go:130] >     },
	I1128 04:36:33.391656 1326355 command_runner.go:130] >     {
	I1128 04:36:33.391664 1326355 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1128 04:36:33.391668 1326355 command_runner.go:130] >       "repoTags": [
	I1128 04:36:33.391674 1326355 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1128 04:36:33.391678 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391683 1326355 command_runner.go:130] >       "repoDigests": [
	I1128 04:36:33.391691 1326355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1128 04:36:33.391701 1326355 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1128 04:36:33.391705 1326355 command_runner.go:130] >       ],
	I1128 04:36:33.391710 1326355 command_runner.go:130] >       "size": "520014",
	I1128 04:36:33.391715 1326355 command_runner.go:130] >       "uid": {
	I1128 04:36:33.391720 1326355 command_runner.go:130] >         "value": "65535"
	I1128 04:36:33.391724 1326355 command_runner.go:130] >       },
	I1128 04:36:33.391730 1326355 command_runner.go:130] >       "username": "",
	I1128 04:36:33.391735 1326355 command_runner.go:130] >       "spec": null,
	I1128 04:36:33.391740 1326355 command_runner.go:130] >       "pinned": false
	I1128 04:36:33.391745 1326355 command_runner.go:130] >     }
	I1128 04:36:33.391749 1326355 command_runner.go:130] >   ]
	I1128 04:36:33.391752 1326355 command_runner.go:130] > }
	I1128 04:36:33.394555 1326355 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:36:33.394577 1326355 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:36:33.394653 1326355 ssh_runner.go:195] Run: crio config
	I1128 04:36:33.442814 1326355 command_runner.go:130] ! time="2023-11-28 04:36:33.442510041Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1128 04:36:33.443108 1326355 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1128 04:36:33.475042 1326355 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 04:36:33.475116 1326355 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 04:36:33.475153 1326355 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 04:36:33.475182 1326355 command_runner.go:130] > #
	I1128 04:36:33.475206 1326355 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 04:36:33.475241 1326355 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 04:36:33.475268 1326355 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 04:36:33.475294 1326355 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 04:36:33.475333 1326355 command_runner.go:130] > # reload'.
	I1128 04:36:33.475363 1326355 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 04:36:33.475386 1326355 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 04:36:33.475421 1326355 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 04:36:33.475446 1326355 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 04:36:33.475467 1326355 command_runner.go:130] > [crio]
	I1128 04:36:33.475503 1326355 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 04:36:33.475529 1326355 command_runner.go:130] > # containers images, in this directory.
	I1128 04:36:33.475559 1326355 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1128 04:36:33.475593 1326355 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 04:36:33.475613 1326355 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1128 04:36:33.475635 1326355 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 04:36:33.475667 1326355 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 04:36:33.475691 1326355 command_runner.go:130] > # storage_driver = "vfs"
	I1128 04:36:33.475714 1326355 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 04:36:33.475746 1326355 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 04:36:33.475769 1326355 command_runner.go:130] > # storage_option = [
	I1128 04:36:33.475787 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.475826 1326355 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 04:36:33.475853 1326355 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 04:36:33.475874 1326355 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 04:36:33.475908 1326355 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 04:36:33.475934 1326355 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 04:36:33.475953 1326355 command_runner.go:130] > # always happen on a node reboot
	I1128 04:36:33.475986 1326355 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 04:36:33.476010 1326355 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 04:36:33.476033 1326355 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 04:36:33.476078 1326355 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 04:36:33.476104 1326355 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 04:36:33.476142 1326355 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 04:36:33.476168 1326355 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 04:36:33.476187 1326355 command_runner.go:130] > # internal_wipe = true
	I1128 04:36:33.476223 1326355 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 04:36:33.476248 1326355 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 04:36:33.476270 1326355 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 04:36:33.476306 1326355 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 04:36:33.476339 1326355 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 04:36:33.476380 1326355 command_runner.go:130] > [crio.api]
	I1128 04:36:33.476403 1326355 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 04:36:33.476421 1326355 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 04:36:33.476455 1326355 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 04:36:33.476477 1326355 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 04:36:33.476497 1326355 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 04:36:33.476518 1326355 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 04:36:33.476552 1326355 command_runner.go:130] > # stream_port = "0"
	I1128 04:36:33.476572 1326355 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 04:36:33.476591 1326355 command_runner.go:130] > # stream_enable_tls = false
	I1128 04:36:33.476626 1326355 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 04:36:33.476650 1326355 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 04:36:33.476698 1326355 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 04:36:33.476734 1326355 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 04:36:33.476756 1326355 command_runner.go:130] > # minutes.
	I1128 04:36:33.476776 1326355 command_runner.go:130] > # stream_tls_cert = ""
	I1128 04:36:33.476811 1326355 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 04:36:33.476835 1326355 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 04:36:33.476861 1326355 command_runner.go:130] > # stream_tls_key = ""
	I1128 04:36:33.476897 1326355 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 04:36:33.476921 1326355 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 04:36:33.476943 1326355 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 04:36:33.476979 1326355 command_runner.go:130] > # stream_tls_ca = ""
	I1128 04:36:33.477009 1326355 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 04:36:33.477029 1326355 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1128 04:36:33.477066 1326355 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 04:36:33.477090 1326355 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1128 04:36:33.477154 1326355 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 04:36:33.477181 1326355 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 04:36:33.477200 1326355 command_runner.go:130] > [crio.runtime]
	I1128 04:36:33.477231 1326355 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 04:36:33.477255 1326355 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 04:36:33.477274 1326355 command_runner.go:130] > # "nofile=1024:2048"
	I1128 04:36:33.477313 1326355 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 04:36:33.477341 1326355 command_runner.go:130] > # default_ulimits = [
	I1128 04:36:33.477359 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.477398 1326355 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 04:36:33.477423 1326355 command_runner.go:130] > # no_pivot = false
	I1128 04:36:33.477444 1326355 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 04:36:33.477478 1326355 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 04:36:33.477503 1326355 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 04:36:33.477526 1326355 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 04:36:33.477559 1326355 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 04:36:33.477585 1326355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 04:36:33.477605 1326355 command_runner.go:130] > # conmon = ""
	I1128 04:36:33.477641 1326355 command_runner.go:130] > # Cgroup setting for conmon
	I1128 04:36:33.477666 1326355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 04:36:33.477686 1326355 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 04:36:33.477721 1326355 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 04:36:33.477745 1326355 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 04:36:33.477768 1326355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 04:36:33.477801 1326355 command_runner.go:130] > # conmon_env = [
	I1128 04:36:33.477823 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.477844 1326355 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 04:36:33.477883 1326355 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 04:36:33.477909 1326355 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 04:36:33.477927 1326355 command_runner.go:130] > # default_env = [
	I1128 04:36:33.477961 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.477986 1326355 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 04:36:33.478006 1326355 command_runner.go:130] > # selinux = false
	I1128 04:36:33.478043 1326355 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 04:36:33.478067 1326355 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 04:36:33.478089 1326355 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 04:36:33.478121 1326355 command_runner.go:130] > # seccomp_profile = ""
	I1128 04:36:33.478148 1326355 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 04:36:33.478169 1326355 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 04:36:33.478205 1326355 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 04:36:33.478228 1326355 command_runner.go:130] > # which might increase security.
	I1128 04:36:33.478248 1326355 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1128 04:36:33.478285 1326355 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 04:36:33.478310 1326355 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 04:36:33.478332 1326355 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 04:36:33.478374 1326355 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 04:36:33.478399 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:36:33.478419 1326355 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 04:36:33.478454 1326355 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 04:36:33.478477 1326355 command_runner.go:130] > # the cgroup blockio controller.
	I1128 04:36:33.478496 1326355 command_runner.go:130] > # blockio_config_file = ""
	I1128 04:36:33.478532 1326355 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 04:36:33.478555 1326355 command_runner.go:130] > # irqbalance daemon.
	I1128 04:36:33.478575 1326355 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 04:36:33.478609 1326355 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 04:36:33.478632 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:36:33.478650 1326355 command_runner.go:130] > # rdt_config_file = ""
	I1128 04:36:33.478684 1326355 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 04:36:33.478706 1326355 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 04:36:33.478726 1326355 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 04:36:33.478746 1326355 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 04:36:33.478784 1326355 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 04:36:33.478805 1326355 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 04:36:33.478844 1326355 command_runner.go:130] > # will be added.
	I1128 04:36:33.478867 1326355 command_runner.go:130] > # default_capabilities = [
	I1128 04:36:33.478886 1326355 command_runner.go:130] > # 	"CHOWN",
	I1128 04:36:33.478917 1326355 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 04:36:33.478939 1326355 command_runner.go:130] > # 	"FSETID",
	I1128 04:36:33.478956 1326355 command_runner.go:130] > # 	"FOWNER",
	I1128 04:36:33.478973 1326355 command_runner.go:130] > # 	"SETGID",
	I1128 04:36:33.478992 1326355 command_runner.go:130] > # 	"SETUID",
	I1128 04:36:33.479024 1326355 command_runner.go:130] > # 	"SETPCAP",
	I1128 04:36:33.479047 1326355 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 04:36:33.479066 1326355 command_runner.go:130] > # 	"KILL",
	I1128 04:36:33.479086 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.479109 1326355 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1128 04:36:33.479148 1326355 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1128 04:36:33.479167 1326355 command_runner.go:130] > # add_inheritable_capabilities = true
	I1128 04:36:33.479188 1326355 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 04:36:33.479221 1326355 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 04:36:33.479244 1326355 command_runner.go:130] > # default_sysctls = [
	I1128 04:36:33.479274 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.479298 1326355 command_runner.go:130] > # List of devices on the host that a
	I1128 04:36:33.479333 1326355 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 04:36:33.479356 1326355 command_runner.go:130] > # allowed_devices = [
	I1128 04:36:33.479376 1326355 command_runner.go:130] > # 	"/dev/fuse",
	I1128 04:36:33.479395 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.479432 1326355 command_runner.go:130] > # List of additional devices, specified as
	I1128 04:36:33.479491 1326355 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 04:36:33.479512 1326355 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 04:36:33.479542 1326355 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 04:36:33.479567 1326355 command_runner.go:130] > # additional_devices = [
	I1128 04:36:33.479585 1326355 command_runner.go:130] > # ]
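	For illustration, a minimal sketch of this option uncommented, reusing the example mapping from the comment above (the host and container device paths are illustrative, not values from this run):
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]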
	I1128 04:36:33.479606 1326355 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 04:36:33.479625 1326355 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 04:36:33.479650 1326355 command_runner.go:130] > # 	"/etc/cdi",
	I1128 04:36:33.479673 1326355 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 04:36:33.479692 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.479729 1326355 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 04:36:33.479753 1326355 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 04:36:33.479772 1326355 command_runner.go:130] > # Defaults to false.
	I1128 04:36:33.479803 1326355 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 04:36:33.479827 1326355 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 04:36:33.479849 1326355 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 04:36:33.479866 1326355 command_runner.go:130] > # hooks_dir = [
	I1128 04:36:33.479885 1326355 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 04:36:33.479911 1326355 command_runner.go:130] > # ]
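	For illustration, a sketch of hooks_dir uncommented with a second, hypothetical directory added; per the comment above, CRI-O automatically skips any listed directory that does not exist:
	hooks_dir = [
		"/usr/share/containers/oci/hooks.d",
		"/etc/containers/oci/hooks.d",
	]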
	I1128 04:36:33.479936 1326355 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 04:36:33.479958 1326355 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 04:36:33.479978 1326355 command_runner.go:130] > # its default mounts from the following two files:
	I1128 04:36:33.479996 1326355 command_runner.go:130] > #
	I1128 04:36:33.480031 1326355 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 04:36:33.480052 1326355 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 04:36:33.480072 1326355 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 04:36:33.480103 1326355 command_runner.go:130] > #
	I1128 04:36:33.480127 1326355 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 04:36:33.480149 1326355 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 04:36:33.480178 1326355 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 04:36:33.480207 1326355 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 04:36:33.480231 1326355 command_runner.go:130] > #
	I1128 04:36:33.480252 1326355 command_runner.go:130] > # default_mounts_file = ""
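	For illustration, a sketch of the /SRC:/DST format such a mounts file uses, one mount per line (both paths are hypothetical):
	/etc/team-certs:/etc/pki/extra-certs
	/srv/shared-data:/mnt/shared-data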
	I1128 04:36:33.480274 1326355 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 04:36:33.480316 1326355 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 04:36:33.480339 1326355 command_runner.go:130] > # pids_limit = 0
	I1128 04:36:33.480359 1326355 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1128 04:36:33.480379 1326355 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 04:36:33.480416 1326355 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 04:36:33.480447 1326355 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 04:36:33.480467 1326355 command_runner.go:130] > # log_size_max = -1
	I1128 04:36:33.480489 1326355 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1128 04:36:33.480518 1326355 command_runner.go:130] > # log_to_journald = false
	I1128 04:36:33.480547 1326355 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 04:36:33.480568 1326355 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 04:36:33.480588 1326355 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 04:36:33.480620 1326355 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 04:36:33.480647 1326355 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 04:36:33.480683 1326355 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 04:36:33.480701 1326355 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 04:36:33.480706 1326355 command_runner.go:130] > # read_only = false
	I1128 04:36:33.480714 1326355 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 04:36:33.480722 1326355 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 04:36:33.480740 1326355 command_runner.go:130] > # live configuration reload.
	I1128 04:36:33.480750 1326355 command_runner.go:130] > # log_level = "info"
	I1128 04:36:33.480757 1326355 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 04:36:33.480763 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:36:33.480773 1326355 command_runner.go:130] > # log_filter = ""
	I1128 04:36:33.480780 1326355 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 04:36:33.480790 1326355 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 04:36:33.480795 1326355 command_runner.go:130] > # separated by comma.
	I1128 04:36:33.480813 1326355 command_runner.go:130] > # uid_mappings = ""
	I1128 04:36:33.480829 1326355 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 04:36:33.480837 1326355 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 04:36:33.480846 1326355 command_runner.go:130] > # separated by comma.
	I1128 04:36:33.480859 1326355 command_runner.go:130] > # gid_mappings = ""
	I1128 04:36:33.480871 1326355 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 04:36:33.480878 1326355 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 04:36:33.480894 1326355 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 04:36:33.480900 1326355 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 04:36:33.480908 1326355 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 04:36:33.480919 1326355 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 04:36:33.480926 1326355 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 04:36:33.480934 1326355 command_runner.go:130] > # minimum_mappable_gid = -1
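	For illustration, a sketch of the containerUID:HostUID:Size form described above, mapping container root into an unprivileged host range (the host base 100000 and size 65536 are illustrative values):
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"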
	I1128 04:36:33.480942 1326355 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 04:36:33.480951 1326355 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 04:36:33.480961 1326355 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1128 04:36:33.480969 1326355 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 04:36:33.480977 1326355 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 04:36:33.480998 1326355 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 04:36:33.481007 1326355 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 04:36:33.481016 1326355 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 04:36:33.481021 1326355 command_runner.go:130] > # drop_infra_ctr = true
	I1128 04:36:33.481031 1326355 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 04:36:33.481041 1326355 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 04:36:33.481050 1326355 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 04:36:33.481058 1326355 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 04:36:33.481065 1326355 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 04:36:33.481074 1326355 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 04:36:33.481079 1326355 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 04:36:33.481087 1326355 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 04:36:33.481095 1326355 command_runner.go:130] > # pinns_path = ""
	I1128 04:36:33.481103 1326355 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 04:36:33.481112 1326355 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 04:36:33.481122 1326355 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 04:36:33.481127 1326355 command_runner.go:130] > # default_runtime = "runc"
	I1128 04:36:33.481136 1326355 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 04:36:33.481145 1326355 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1128 04:36:33.481160 1326355 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 04:36:33.481167 1326355 command_runner.go:130] > # creation as a file is not desired either.
	I1128 04:36:33.481179 1326355 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 04:36:33.481189 1326355 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 04:36:33.481200 1326355 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 04:36:33.481205 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.481215 1326355 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 04:36:33.481223 1326355 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 04:36:33.481234 1326355 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 04:36:33.481241 1326355 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 04:36:33.481248 1326355 command_runner.go:130] > #
	I1128 04:36:33.481254 1326355 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 04:36:33.481260 1326355 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 04:36:33.481265 1326355 command_runner.go:130] > #  runtime_type = "oci"
	I1128 04:36:33.481274 1326355 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 04:36:33.481285 1326355 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 04:36:33.481290 1326355 command_runner.go:130] > #  allowed_annotations = []
	I1128 04:36:33.481296 1326355 command_runner.go:130] > # Where:
	I1128 04:36:33.481303 1326355 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 04:36:33.481315 1326355 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 04:36:33.481323 1326355 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 04:36:33.481336 1326355 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 04:36:33.481341 1326355 command_runner.go:130] > #   in $PATH.
	I1128 04:36:33.481351 1326355 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 04:36:33.481358 1326355 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 04:36:33.481366 1326355 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 04:36:33.481373 1326355 command_runner.go:130] > #   state.
	I1128 04:36:33.481381 1326355 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 04:36:33.481390 1326355 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1128 04:36:33.481398 1326355 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 04:36:33.481407 1326355 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 04:36:33.481415 1326355 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 04:36:33.481426 1326355 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 04:36:33.481432 1326355 command_runner.go:130] > #   The currently recognized values are:
	I1128 04:36:33.481443 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 04:36:33.481452 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 04:36:33.481461 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 04:36:33.481471 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 04:36:33.481480 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 04:36:33.481493 1326355 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 04:36:33.481504 1326355 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 04:36:33.481512 1326355 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 04:36:33.481521 1326355 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 04:36:33.481526 1326355 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 04:36:33.481532 1326355 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1128 04:36:33.481539 1326355 command_runner.go:130] > runtime_type = "oci"
	I1128 04:36:33.481544 1326355 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 04:36:33.481551 1326355 command_runner.go:130] > runtime_config_path = ""
	I1128 04:36:33.481556 1326355 command_runner.go:130] > monitor_path = ""
	I1128 04:36:33.481564 1326355 command_runner.go:130] > monitor_cgroup = ""
	I1128 04:36:33.481569 1326355 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 04:36:33.481608 1326355 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 04:36:33.481617 1326355 command_runner.go:130] > # running containers
	I1128 04:36:33.481622 1326355 command_runner.go:130] > #[crio.runtime.runtimes.crun]
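	For illustration, a sketch of what an uncommented crun handler could look like, following the runtime-handler format documented above (the binary and root paths are assumptions, not values observed in this run):
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"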
	I1128 04:36:33.481630 1326355 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 04:36:33.481640 1326355 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 04:36:33.481649 1326355 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1128 04:36:33.481657 1326355 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 04:36:33.481666 1326355 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 04:36:33.481672 1326355 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 04:36:33.481679 1326355 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 04:36:33.481688 1326355 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 04:36:33.481693 1326355 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1128 04:36:33.481704 1326355 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 04:36:33.481711 1326355 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 04:36:33.481719 1326355 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 04:36:33.481730 1326355 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1128 04:36:33.481746 1326355 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 04:36:33.481755 1326355 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 04:36:33.481766 1326355 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 04:36:33.481779 1326355 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 04:36:33.481787 1326355 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 04:36:33.481799 1326355 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 04:36:33.481803 1326355 command_runner.go:130] > # Example:
	I1128 04:36:33.481809 1326355 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 04:36:33.481819 1326355 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 04:36:33.481829 1326355 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 04:36:33.481835 1326355 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 04:36:33.481842 1326355 command_runner.go:130] > # cpuset = 0
	I1128 04:36:33.481847 1326355 command_runner.go:130] > # cpushares = "0-1"
	I1128 04:36:33.481851 1326355 command_runner.go:130] > # Where:
	I1128 04:36:33.481859 1326355 command_runner.go:130] > # The workload name is workload-type.
	I1128 04:36:33.481867 1326355 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 04:36:33.481876 1326355 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 04:36:33.481883 1326355 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 04:36:33.481893 1326355 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 04:36:33.481904 1326355 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 04:36:33.481909 1326355 command_runner.go:130] > # 
	I1128 04:36:33.481919 1326355 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 04:36:33.481926 1326355 command_runner.go:130] > #
	I1128 04:36:33.481933 1326355 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 04:36:33.481943 1326355 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 04:36:33.481953 1326355 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 04:36:33.481962 1326355 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 04:36:33.481972 1326355 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 04:36:33.481977 1326355 command_runner.go:130] > [crio.image]
	I1128 04:36:33.481987 1326355 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 04:36:33.481992 1326355 command_runner.go:130] > # default_transport = "docker://"
	I1128 04:36:33.482000 1326355 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 04:36:33.482010 1326355 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 04:36:33.482015 1326355 command_runner.go:130] > # global_auth_file = ""
	I1128 04:36:33.482023 1326355 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 04:36:33.482030 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:36:33.482038 1326355 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 04:36:33.482046 1326355 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 04:36:33.482055 1326355 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 04:36:33.482061 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:36:33.482069 1326355 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 04:36:33.482076 1326355 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 04:36:33.482083 1326355 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1128 04:36:33.482093 1326355 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1128 04:36:33.482104 1326355 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 04:36:33.482112 1326355 command_runner.go:130] > # pause_command = "/pause"
	I1128 04:36:33.482120 1326355 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 04:36:33.482130 1326355 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 04:36:33.482137 1326355 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 04:36:33.482148 1326355 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 04:36:33.482155 1326355 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 04:36:33.482162 1326355 command_runner.go:130] > # signature_policy = ""
	I1128 04:36:33.482172 1326355 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 04:36:33.482183 1326355 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 04:36:33.482188 1326355 command_runner.go:130] > # changing them here.
	I1128 04:36:33.482196 1326355 command_runner.go:130] > # insecure_registries = [
	I1128 04:36:33.482201 1326355 command_runner.go:130] > # ]
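	For illustration, a sketch with one hypothetical registry exempted from TLS verification; as the comment above advises, /etc/containers/registries.conf is usually the better place for this:
	insecure_registries = [
		"registry.internal:5000",
	]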
	I1128 04:36:33.482208 1326355 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 04:36:33.482218 1326355 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 04:36:33.482223 1326355 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 04:36:33.482231 1326355 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 04:36:33.482239 1326355 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 04:36:33.482247 1326355 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 04:36:33.482252 1326355 command_runner.go:130] > # CNI plugins.
	I1128 04:36:33.482260 1326355 command_runner.go:130] > [crio.network]
	I1128 04:36:33.482270 1326355 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 04:36:33.482279 1326355 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1128 04:36:33.482284 1326355 command_runner.go:130] > # cni_default_network = ""
	I1128 04:36:33.482293 1326355 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 04:36:33.482301 1326355 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 04:36:33.482307 1326355 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 04:36:33.482314 1326355 command_runner.go:130] > # plugin_dirs = [
	I1128 04:36:33.482319 1326355 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 04:36:33.482323 1326355 command_runner.go:130] > # ]
	I1128 04:36:33.482332 1326355 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 04:36:33.482337 1326355 command_runner.go:130] > [crio.metrics]
	I1128 04:36:33.482348 1326355 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 04:36:33.482353 1326355 command_runner.go:130] > # enable_metrics = false
	I1128 04:36:33.482361 1326355 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 04:36:33.482370 1326355 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 04:36:33.482379 1326355 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 04:36:33.482390 1326355 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 04:36:33.482397 1326355 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 04:36:33.482405 1326355 command_runner.go:130] > # metrics_collectors = [
	I1128 04:36:33.482410 1326355 command_runner.go:130] > # 	"operations",
	I1128 04:36:33.482419 1326355 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 04:36:33.482427 1326355 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 04:36:33.482432 1326355 command_runner.go:130] > # 	"operations_errors",
	I1128 04:36:33.482437 1326355 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 04:36:33.482444 1326355 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 04:36:33.482450 1326355 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 04:36:33.482457 1326355 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 04:36:33.482463 1326355 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 04:36:33.482468 1326355 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 04:36:33.482476 1326355 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 04:36:33.482480 1326355 command_runner.go:130] > # 	"containers_oom_total",
	I1128 04:36:33.482485 1326355 command_runner.go:130] > # 	"containers_oom",
	I1128 04:36:33.482493 1326355 command_runner.go:130] > # 	"processes_defunct",
	I1128 04:36:33.482499 1326355 command_runner.go:130] > # 	"operations_total",
	I1128 04:36:33.482504 1326355 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 04:36:33.482514 1326355 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 04:36:33.482520 1326355 command_runner.go:130] > # 	"operations_errors_total",
	I1128 04:36:33.482525 1326355 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 04:36:33.482535 1326355 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 04:36:33.482541 1326355 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 04:36:33.482549 1326355 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 04:36:33.482554 1326355 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 04:36:33.482560 1326355 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 04:36:33.482566 1326355 command_runner.go:130] > # ]
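	For illustration, a sketch enabling metrics with a subset of the collectors listed above (the collector names and port come from this config; the particular selection is illustrative):
	[crio.metrics]
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]
	metrics_port = 9090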
	I1128 04:36:33.482573 1326355 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 04:36:33.482578 1326355 command_runner.go:130] > # metrics_port = 9090
	I1128 04:36:33.482586 1326355 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 04:36:33.482591 1326355 command_runner.go:130] > # metrics_socket = ""
	I1128 04:36:33.482603 1326355 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 04:36:33.482610 1326355 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 04:36:33.482618 1326355 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 04:36:33.482629 1326355 command_runner.go:130] > # certificate on any modification event.
	I1128 04:36:33.482634 1326355 command_runner.go:130] > # metrics_cert = ""
	I1128 04:36:33.482643 1326355 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 04:36:33.482649 1326355 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 04:36:33.482656 1326355 command_runner.go:130] > # metrics_key = ""
	I1128 04:36:33.482663 1326355 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 04:36:33.482669 1326355 command_runner.go:130] > [crio.tracing]
	I1128 04:36:33.482677 1326355 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 04:36:33.482686 1326355 command_runner.go:130] > # enable_tracing = false
	I1128 04:36:33.482692 1326355 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1128 04:36:33.482698 1326355 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 04:36:33.482707 1326355 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 04:36:33.482715 1326355 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
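	For illustration, a sketch enabling OpenTelemetry export on the default endpoint shown above (the sampling rate of 1000 per million, i.e. 0.1% of spans, is an illustrative choice):
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000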
	I1128 04:36:33.482722 1326355 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 04:36:33.482729 1326355 command_runner.go:130] > [crio.stats]
	I1128 04:36:33.482736 1326355 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 04:36:33.482744 1326355 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 04:36:33.482752 1326355 command_runner.go:130] > # stats_collection_period = 0
	I1128 04:36:33.482851 1326355 cni.go:84] Creating CNI manager for ""
	I1128 04:36:33.482864 1326355 cni.go:136] 1 nodes found, recommending kindnet
	I1128 04:36:33.482887 1326355 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:36:33.482909 1326355 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-448128 NodeName:multinode-448128 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:36:33.483067 1326355 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-448128"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:36:33.483130 1326355 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-448128 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-448128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:36:33.483204 1326355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:36:33.492968 1326355 command_runner.go:130] > kubeadm
	I1128 04:36:33.492990 1326355 command_runner.go:130] > kubectl
	I1128 04:36:33.492996 1326355 command_runner.go:130] > kubelet
	I1128 04:36:33.494240 1326355 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:36:33.494310 1326355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:36:33.504985 1326355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1128 04:36:33.526403 1326355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:36:33.548058 1326355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1128 04:36:33.569418 1326355 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1128 04:36:33.573815 1326355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:36:33.587781 1326355 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128 for IP: 192.168.58.2
	I1128 04:36:33.587815 1326355 certs.go:190] acquiring lock for shared ca certs: {Name:mka7cf71bac87c390cad9bb03b67c849db7103ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:36:33.587960 1326355 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key
	I1128 04:36:33.588008 1326355 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key
	I1128 04:36:33.588057 1326355 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.key
	I1128 04:36:33.588072 1326355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.crt with IP's: []
	I1128 04:36:33.833566 1326355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.crt ...
	I1128 04:36:33.833597 1326355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.crt: {Name:mk7fd49fc73bce15bd1606a05e0615b9797d4461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:36:33.833799 1326355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.key ...
	I1128 04:36:33.833812 1326355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.key: {Name:mk8ee5e47d8a1ef96a1459f44ec93cea503ac6f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:36:33.833904 1326355 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.key.cee25041
	I1128 04:36:33.833922 1326355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 04:36:34.087920 1326355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.crt.cee25041 ...
	I1128 04:36:34.087957 1326355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.crt.cee25041: {Name:mk5646cd17708267a8bb8732192ce0bef1f0c6e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:36:34.088155 1326355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.key.cee25041 ...
	I1128 04:36:34.088168 1326355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.key.cee25041: {Name:mkbbb666ea05a457cc708fe2f9deae97567474ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:36:34.088252 1326355 certs.go:337] copying /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.crt
	I1128 04:36:34.088351 1326355 certs.go:341] copying /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.key
	I1128 04:36:34.088418 1326355 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.key
	I1128 04:36:34.088440 1326355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.crt with IP's: []
	I1128 04:36:34.406836 1326355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.crt ...
	I1128 04:36:34.406870 1326355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.crt: {Name:mk1209372d2a8592aa0e2a5efc546d34e0748549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:36:34.407063 1326355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.key ...
	I1128 04:36:34.407080 1326355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.key: {Name:mkd333f1ea52e151c040890bd5de633a37a5e911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:36:34.407164 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1128 04:36:34.407187 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1128 04:36:34.407204 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1128 04:36:34.407219 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1128 04:36:34.407231 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 04:36:34.407249 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 04:36:34.407261 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 04:36:34.407282 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 04:36:34.407335 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem (1338 bytes)
	W1128 04:36:34.407373 1326355 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415_empty.pem, impossibly tiny 0 bytes
	I1128 04:36:34.407386 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:36:34.407416 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem (1082 bytes)
	I1128 04:36:34.407444 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:36:34.407493 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem (1679 bytes)
	I1128 04:36:34.407543 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:36:34.407574 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem -> /usr/share/ca-certificates/1261415.pem
	I1128 04:36:34.407590 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> /usr/share/ca-certificates/12614152.pem
	I1128 04:36:34.407604 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:36:34.408214 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:36:34.438826 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 04:36:34.469520 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:36:34.500212 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:36:34.529849 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:36:34.559376 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:36:34.589045 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:36:34.618407 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1128 04:36:34.649915 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem --> /usr/share/ca-certificates/1261415.pem (1338 bytes)
	I1128 04:36:34.679913 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /usr/share/ca-certificates/12614152.pem (1708 bytes)
	I1128 04:36:34.709203 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:36:34.738650 1326355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:36:34.760168 1326355 ssh_runner.go:195] Run: openssl version
	I1128 04:36:34.768718 1326355 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1128 04:36:34.769234 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1261415.pem && ln -fs /usr/share/ca-certificates/1261415.pem /etc/ssl/certs/1261415.pem"
	I1128 04:36:34.781577 1326355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261415.pem
	I1128 04:36:34.786109 1326355 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 28 04:21 /usr/share/ca-certificates/1261415.pem
	I1128 04:36:34.786147 1326355 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 04:21 /usr/share/ca-certificates/1261415.pem
	I1128 04:36:34.786205 1326355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261415.pem
	I1128 04:36:34.794895 1326355 command_runner.go:130] > 51391683
	I1128 04:36:34.794985 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1261415.pem /etc/ssl/certs/51391683.0"
	I1128 04:36:34.806887 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12614152.pem && ln -fs /usr/share/ca-certificates/12614152.pem /etc/ssl/certs/12614152.pem"
	I1128 04:36:34.819250 1326355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12614152.pem
	I1128 04:36:34.823778 1326355 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 28 04:21 /usr/share/ca-certificates/12614152.pem
	I1128 04:36:34.824082 1326355 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 04:21 /usr/share/ca-certificates/12614152.pem
	I1128 04:36:34.824156 1326355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12614152.pem
	I1128 04:36:34.832462 1326355 command_runner.go:130] > 3ec20f2e
	I1128 04:36:34.832836 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12614152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:36:34.844201 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:36:34.855961 1326355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:36:34.860502 1326355 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 28 04:13 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:36:34.860755 1326355 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 04:13 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:36:34.860842 1326355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:36:34.869158 1326355 command_runner.go:130] > b5213941
	I1128 04:36:34.869600 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:36:34.881397 1326355 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:36:34.885708 1326355 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 04:36:34.885739 1326355 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 04:36:34.885777 1326355 kubeadm.go:404] StartCluster: {Name:multinode-448128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-448128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:36:34.885860 1326355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:36:34.885920 1326355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:36:34.936639 1326355 cri.go:89] found id: ""
	I1128 04:36:34.936737 1326355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:36:34.947068 1326355 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1128 04:36:34.947096 1326355 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1128 04:36:34.947105 1326355 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1128 04:36:34.947181 1326355 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:36:34.957665 1326355 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1128 04:36:34.957734 1326355 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:36:34.969170 1326355 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1128 04:36:34.969199 1326355 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1128 04:36:34.969209 1326355 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1128 04:36:34.969222 1326355 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:36:34.969372 1326355 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:36:34.969439 1326355 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1128 04:36:35.026852 1326355 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 04:36:35.026885 1326355 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1128 04:36:35.027121 1326355 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:36:35.027145 1326355 command_runner.go:130] > [preflight] Running pre-flight checks
	I1128 04:36:35.073396 1326355 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1128 04:36:35.073432 1326355 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1128 04:36:35.073495 1326355 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1128 04:36:35.073507 1326355 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1050-aws
	I1128 04:36:35.073539 1326355 kubeadm.go:322] OS: Linux
	I1128 04:36:35.073549 1326355 command_runner.go:130] > OS: Linux
	I1128 04:36:35.073591 1326355 kubeadm.go:322] CGROUPS_CPU: enabled
	I1128 04:36:35.073605 1326355 command_runner.go:130] > CGROUPS_CPU: enabled
	I1128 04:36:35.073650 1326355 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1128 04:36:35.073659 1326355 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1128 04:36:35.073706 1326355 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1128 04:36:35.073719 1326355 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1128 04:36:35.073775 1326355 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1128 04:36:35.073790 1326355 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1128 04:36:35.073846 1326355 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1128 04:36:35.073854 1326355 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1128 04:36:35.073901 1326355 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1128 04:36:35.073914 1326355 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1128 04:36:35.073956 1326355 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1128 04:36:35.073967 1326355 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1128 04:36:35.074012 1326355 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1128 04:36:35.074020 1326355 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1128 04:36:35.074063 1326355 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1128 04:36:35.074072 1326355 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1128 04:36:35.162183 1326355 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:36:35.162264 1326355 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:36:35.162431 1326355 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:36:35.162462 1326355 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:36:35.162596 1326355 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:36:35.162618 1326355 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:36:35.404069 1326355 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:36:35.408800 1326355 out.go:204]   - Generating certificates and keys ...
	I1128 04:36:35.404440 1326355 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:36:35.409092 1326355 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:36:35.409123 1326355 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1128 04:36:35.409231 1326355 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:36:35.409264 1326355 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1128 04:36:35.992293 1326355 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 04:36:35.992319 1326355 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 04:36:37.713352 1326355 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 04:36:37.713380 1326355 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1128 04:36:38.386613 1326355 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 04:36:38.386643 1326355 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1128 04:36:38.886716 1326355 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 04:36:38.886751 1326355 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1128 04:36:39.583807 1326355 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 04:36:39.583832 1326355 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1128 04:36:39.583977 1326355 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-448128] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1128 04:36:39.583984 1326355 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-448128] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1128 04:36:40.105755 1326355 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 04:36:40.105785 1326355 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1128 04:36:40.105921 1326355 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-448128] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1128 04:36:40.105931 1326355 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-448128] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1128 04:36:40.394757 1326355 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 04:36:40.394793 1326355 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 04:36:40.621057 1326355 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 04:36:40.621099 1326355 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 04:36:41.017755 1326355 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 04:36:41.017803 1326355 command_runner.go:130] > [certs] Generating "sa" key and public key
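	The [certs] phase above signed the etcd serving and peer certificates for the DNS names [localhost multinode-448128] and the node IPs. One way to confirm the SANs on the resulting certificate, assuming kubeadm wrote it under minikube's certificateDir /var/lib/minikube/certs (as the earlier `ls /var/lib/minikube/certs/etcd` implies) with the standard kubeadm etcd/server.crt layout:

    # Print the Subject Alternative Name extension of the etcd serving cert
    sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text \
      | grep -A1 'Subject Alternative Name'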
	I1128 04:36:41.018063 1326355 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:36:41.018080 1326355 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:36:41.597109 1326355 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:36:41.597146 1326355 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:36:41.837555 1326355 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:36:41.837581 1326355 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:36:42.415881 1326355 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:36:42.415908 1326355 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:36:43.304005 1326355 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:36:43.304031 1326355 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:36:43.304912 1326355 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:36:43.304928 1326355 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:36:43.309026 1326355 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:36:43.311305 1326355 out.go:204]   - Booting up control plane ...
	I1128 04:36:43.309103 1326355 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:36:43.311400 1326355 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:36:43.311417 1326355 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:36:43.311525 1326355 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:36:43.311537 1326355 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:36:43.312389 1326355 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:36:43.312408 1326355 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:36:43.323694 1326355 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:36:43.323723 1326355 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:36:43.324698 1326355 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:36:43.324719 1326355 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:36:43.324991 1326355 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:36:43.325023 1326355 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 04:36:43.434130 1326355 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:36:43.434169 1326355 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:36:50.936840 1326355 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502741 seconds
	I1128 04:36:50.936868 1326355 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.502741 seconds
	I1128 04:36:50.936967 1326355 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:36:50.936973 1326355 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:36:50.952582 1326355 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:36:50.952606 1326355 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:36:51.478573 1326355 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:36:51.478604 1326355 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:36:51.478782 1326355 kubeadm.go:322] [mark-control-plane] Marking the node multinode-448128 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:36:51.478792 1326355 command_runner.go:130] > [mark-control-plane] Marking the node multinode-448128 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:36:51.990983 1326355 kubeadm.go:322] [bootstrap-token] Using token: uwvaut.p5m9v5jehjmmscsp
	I1128 04:36:51.993070 1326355 out.go:204]   - Configuring RBAC rules ...
	I1128 04:36:51.991101 1326355 command_runner.go:130] > [bootstrap-token] Using token: uwvaut.p5m9v5jehjmmscsp
	I1128 04:36:51.993208 1326355 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:36:51.993234 1326355 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:36:52.005910 1326355 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:36:52.005940 1326355 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:36:52.037428 1326355 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:36:52.037454 1326355 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:36:52.049957 1326355 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:36:52.049984 1326355 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:36:52.055567 1326355 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:36:52.055596 1326355 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:36:52.059815 1326355 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:36:52.059843 1326355 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:36:52.075833 1326355 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:36:52.075859 1326355 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:36:52.315364 1326355 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:36:52.315390 1326355 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1128 04:36:52.443990 1326355 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:36:52.444018 1326355 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1128 04:36:52.445243 1326355 kubeadm.go:322] 
	I1128 04:36:52.445315 1326355 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:36:52.445328 1326355 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1128 04:36:52.445338 1326355 kubeadm.go:322] 
	I1128 04:36:52.445411 1326355 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:36:52.445420 1326355 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1128 04:36:52.445425 1326355 kubeadm.go:322] 
	I1128 04:36:52.445449 1326355 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:36:52.445458 1326355 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1128 04:36:52.445538 1326355 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:36:52.445563 1326355 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:36:52.445619 1326355 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:36:52.445628 1326355 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:36:52.445632 1326355 kubeadm.go:322] 
	I1128 04:36:52.445683 1326355 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:36:52.445692 1326355 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1128 04:36:52.445696 1326355 kubeadm.go:322] 
	I1128 04:36:52.445741 1326355 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:36:52.445748 1326355 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:36:52.445752 1326355 kubeadm.go:322] 
	I1128 04:36:52.445801 1326355 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:36:52.445809 1326355 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1128 04:36:52.445879 1326355 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:36:52.445887 1326355 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:36:52.445951 1326355 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:36:52.445960 1326355 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:36:52.445964 1326355 kubeadm.go:322] 
	I1128 04:36:52.446042 1326355 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:36:52.446053 1326355 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:36:52.446125 1326355 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:36:52.446133 1326355 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1128 04:36:52.446137 1326355 kubeadm.go:322] 
	I1128 04:36:52.446216 1326355 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token uwvaut.p5m9v5jehjmmscsp \
	I1128 04:36:52.446226 1326355 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token uwvaut.p5m9v5jehjmmscsp \
	I1128 04:36:52.446324 1326355 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c \
	I1128 04:36:52.446332 1326355 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c \
	I1128 04:36:52.446351 1326355 kubeadm.go:322] 	--control-plane 
	I1128 04:36:52.446360 1326355 command_runner.go:130] > 	--control-plane 
	I1128 04:36:52.446364 1326355 kubeadm.go:322] 
	I1128 04:36:52.446444 1326355 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:36:52.446452 1326355 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:36:52.446457 1326355 kubeadm.go:322] 
	I1128 04:36:52.446534 1326355 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token uwvaut.p5m9v5jehjmmscsp \
	I1128 04:36:52.446542 1326355 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token uwvaut.p5m9v5jehjmmscsp \
	I1128 04:36:52.446637 1326355 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c 
	I1128 04:36:52.446645 1326355 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c 
	I1128 04:36:52.450829 1326355 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1128 04:36:52.450857 1326355 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1128 04:36:52.450965 1326355 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:36:52.450980 1326355 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
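	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key, so it can be recomputed from the CA certificate itself. A sketch, assuming the ca.crt sits in this run's certificateDir /var/lib/minikube/certs:

    # Recompute kubeadm's discovery token CA cert hash from the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # Prefix the hex output with "sha256:" to match the join flag above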
	I1128 04:36:52.451009 1326355 cni.go:84] Creating CNI manager for ""
	I1128 04:36:52.451020 1326355 cni.go:136] 1 nodes found, recommending kindnet
	I1128 04:36:52.453570 1326355 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1128 04:36:52.455392 1326355 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 04:36:52.468498 1326355 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 04:36:52.468526 1326355 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1128 04:36:52.468550 1326355 command_runner.go:130] > Device: 3ah/58d	Inode: 5452979     Links: 1
	I1128 04:36:52.468559 1326355 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 04:36:52.468570 1326355 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1128 04:36:52.468580 1326355 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1128 04:36:52.468591 1326355 command_runner.go:130] > Change: 2023-11-28 04:13:24.893843648 +0000
	I1128 04:36:52.468597 1326355 command_runner.go:130] >  Birth: 2023-11-28 04:13:24.849843889 +0000
	I1128 04:36:52.473154 1326355 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 04:36:52.473178 1326355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 04:36:52.532882 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 04:36:53.347924 1326355 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1128 04:36:53.357495 1326355 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1128 04:36:53.366740 1326355 command_runner.go:130] > serviceaccount/kindnet created
	I1128 04:36:53.379010 1326355 command_runner.go:130] > daemonset.apps/kindnet created
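	With one node found, minikube recommends kindnet and applies its manifest above, creating the RBAC objects, ServiceAccount, and DaemonSet. A quick way to verify the CNI rollout, assuming kubectl points at this cluster:

    # The DaemonSet should schedule one kindnet pod per node
    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system get pods -o wide | grep kindnet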
	I1128 04:36:53.385226 1326355 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:36:53.385351 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:53.385417 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=multinode-448128 minikube.k8s.io/updated_at=2023_11_28T04_36_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:53.543700 1326355 command_runner.go:130] > node/multinode-448128 labeled
	I1128 04:36:53.547422 1326355 command_runner.go:130] > -16
	I1128 04:36:53.547465 1326355 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1128 04:36:53.547502 1326355 ops.go:34] apiserver oom_adj: -16
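	The oom_adj check above confirms the kube-apiserver runs with an OOM-killer bias of -16, which makes the kernel markedly less likely to kill it under memory pressure. The same read, as the log performs it:

    # Lower values bias the OOM killer away from the process; -16 protects
    # the apiserver (oom_adj is the legacy interface to oom_score_adj)
    cat /proc/$(pgrep kube-apiserver)/oom_adj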
	I1128 04:36:53.547583 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:53.703161 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:53.703251 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:53.794474 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:54.298478 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:54.395534 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:54.799093 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:54.886298 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:55.298882 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:55.393462 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:55.798427 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:55.890630 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:56.298190 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:56.388824 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:56.798264 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:56.887455 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:57.298260 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:57.402876 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:57.798345 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:57.891687 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:58.298193 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:58.394822 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:58.798416 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:58.888059 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:59.298256 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:59.391576 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:36:59.799181 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:36:59.895605 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:00.298385 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:00.430864 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:00.798234 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:00.892086 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:01.298805 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:01.475485 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:01.799018 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:01.895288 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:02.298551 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:02.395042 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:02.798224 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:02.889599 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:03.298934 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:03.396709 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:03.798211 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:03.898825 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:04.298194 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:04.409054 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:04.798196 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:04.945223 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:05.298722 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:05.415587 1326355 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 04:37:05.798869 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:37:05.924808 1326355 command_runner.go:130] > NAME      SECRETS   AGE
	I1128 04:37:05.924827 1326355 command_runner.go:130] > default   0         0s
	I1128 04:37:05.925180 1326355 kubeadm.go:1081] duration metric: took 12.5398709s to wait for elevateKubeSystemPrivileges.
	I1128 04:37:05.925205 1326355 kubeadm.go:406] StartCluster complete in 31.039430939s
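	The repeated NotFound errors above are expected: minikube polls `kubectl get sa default` until the token controller creates the "default" ServiceAccount, which here took roughly 12.5s. The pattern reduces to a simple wait loop; a minimal sketch:

    # Poll until the default ServiceAccount exists, then proceed
    until kubectl get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done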
	I1128 04:37:05.925226 1326355 settings.go:142] acquiring lock: {Name:mk51bec1305a61d1e5f21881e1d4b01dfafff7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:37:05.925306 1326355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:37:05.926084 1326355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/kubeconfig: {Name:mkdd24900acdf0a7a11c60f4e6d81c9963f4153d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:37:05.926783 1326355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:37:05.927060 1326355 config.go:182] Loaded profile config "multinode-448128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:37:05.927264 1326355 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:37:05.927373 1326355 addons.go:69] Setting storage-provisioner=true in profile "multinode-448128"
	I1128 04:37:05.927406 1326355 addons.go:231] Setting addon storage-provisioner=true in "multinode-448128"
	I1128 04:37:05.927453 1326355 host.go:66] Checking if "multinode-448128" exists ...
	I1128 04:37:05.927952 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Status}}
	I1128 04:37:05.928151 1326355 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:37:05.928488 1326355 kapi.go:59] client config for multinode-448128: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:37:05.929864 1326355 addons.go:69] Setting default-storageclass=true in profile "multinode-448128"
	I1128 04:37:05.929898 1326355 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-448128"
	I1128 04:37:05.930225 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Status}}
	I1128 04:37:05.930426 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 04:37:05.930444 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:05.930457 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:05.930465 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:05.930688 1326355 cert_rotation.go:137] Starting client certificate rotation controller
	I1128 04:37:05.981171 1326355 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:37:05.987192 1326355 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:37:05.987226 1326355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:37:05.987303 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:37:05.989771 1326355 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:37:05.990106 1326355 kapi.go:59] client config for multinode-448128: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:37:05.990412 1326355 addons.go:231] Setting addon default-storageclass=true in "multinode-448128"
	I1128 04:37:05.990448 1326355 host.go:66] Checking if "multinode-448128" exists ...
	I1128 04:37:05.991001 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Status}}
	I1128 04:37:06.001129 1326355 round_trippers.go:574] Response Status: 200 OK in 70 milliseconds
	I1128 04:37:06.001158 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:06.001167 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:05 GMT
	I1128 04:37:06.001175 1326355 round_trippers.go:580]     Audit-Id: 0273ba42-f056-4b11-a357-e212ca2d1b0b
	I1128 04:37:06.001183 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:06.001190 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:06.001196 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:06.001209 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:06.001219 1326355 round_trippers.go:580]     Content-Length: 291
	I1128 04:37:06.001687 1326355 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0734a789-b557-4140-8c57-08339bccd505","resourceVersion":"318","creationTimestamp":"2023-11-28T04:36:52Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1128 04:37:06.002392 1326355 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0734a789-b557-4140-8c57-08339bccd505","resourceVersion":"318","creationTimestamp":"2023-11-28T04:36:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1128 04:37:06.002470 1326355 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 04:37:06.002478 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:06.002487 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:06.002497 1326355 round_trippers.go:473]     Content-Type: application/json
	I1128 04:37:06.002504 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:06.029823 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:37:06.058536 1326355 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:37:06.058562 1326355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:37:06.058633 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:37:06.076583 1326355 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I1128 04:37:06.076609 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:06.076619 1326355 round_trippers.go:580]     Audit-Id: 268c547f-a9a2-4e29-945f-05488aecc677
	I1128 04:37:06.076625 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:06.076632 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:06.076638 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:06.076645 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:06.080136 1326355 round_trippers.go:580]     Content-Length: 291
	I1128 04:37:06.080180 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:06 GMT
	I1128 04:37:06.080215 1326355 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0734a789-b557-4140-8c57-08339bccd505","resourceVersion":"341","creationTimestamp":"2023-11-28T04:36:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1128 04:37:06.080379 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 04:37:06.080395 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:06.080404 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:06.080412 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:06.102177 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:37:06.134356 1326355 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I1128 04:37:06.134381 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:06.134390 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:06.134396 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:06.134403 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:06.134409 1326355 round_trippers.go:580]     Content-Length: 291
	I1128 04:37:06.134415 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:06 GMT
	I1128 04:37:06.134421 1326355 round_trippers.go:580]     Audit-Id: 039ede29-0924-4c80-b8f8-50fa57e89bfe
	I1128 04:37:06.134428 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:06.135471 1326355 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0734a789-b557-4140-8c57-08339bccd505","resourceVersion":"341","creationTimestamp":"2023-11-28T04:36:52Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1128 04:37:06.135592 1326355 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-448128" context rescaled to 1 replicas
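	The GET/PUT exchange above edits the Scale subresource of the coredns Deployment, dropping spec.replicas from 2 to 1 (a single replica suffices for a one-node cluster at this point). kubectl drives the same subresource; an equivalent one-liner against this context would be:

    # Issues a PUT to .../deployments/coredns/scale, as in the trace above
    kubectl -n kube-system scale deployment coredns --replicas=1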
	I1128 04:37:06.135625 1326355 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:37:06.139282 1326355 out.go:177] * Verifying Kubernetes components...
	I1128 04:37:06.141367 1326355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:37:06.221340 1326355 command_runner.go:130] > apiVersion: v1
	I1128 04:37:06.221359 1326355 command_runner.go:130] > data:
	I1128 04:37:06.221365 1326355 command_runner.go:130] >   Corefile: |
	I1128 04:37:06.221370 1326355 command_runner.go:130] >     .:53 {
	I1128 04:37:06.221375 1326355 command_runner.go:130] >         errors
	I1128 04:37:06.221381 1326355 command_runner.go:130] >         health {
	I1128 04:37:06.221386 1326355 command_runner.go:130] >            lameduck 5s
	I1128 04:37:06.221390 1326355 command_runner.go:130] >         }
	I1128 04:37:06.221395 1326355 command_runner.go:130] >         ready
	I1128 04:37:06.221403 1326355 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1128 04:37:06.221408 1326355 command_runner.go:130] >            pods insecure
	I1128 04:37:06.221414 1326355 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1128 04:37:06.221419 1326355 command_runner.go:130] >            ttl 30
	I1128 04:37:06.221424 1326355 command_runner.go:130] >         }
	I1128 04:37:06.221429 1326355 command_runner.go:130] >         prometheus :9153
	I1128 04:37:06.221435 1326355 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1128 04:37:06.221440 1326355 command_runner.go:130] >            max_concurrent 1000
	I1128 04:37:06.221448 1326355 command_runner.go:130] >         }
	I1128 04:37:06.221453 1326355 command_runner.go:130] >         cache 30
	I1128 04:37:06.221458 1326355 command_runner.go:130] >         loop
	I1128 04:37:06.221463 1326355 command_runner.go:130] >         reload
	I1128 04:37:06.221467 1326355 command_runner.go:130] >         loadbalance
	I1128 04:37:06.221472 1326355 command_runner.go:130] >     }
	I1128 04:37:06.221477 1326355 command_runner.go:130] > kind: ConfigMap
	I1128 04:37:06.221482 1326355 command_runner.go:130] > metadata:
	I1128 04:37:06.221491 1326355 command_runner.go:130] >   creationTimestamp: "2023-11-28T04:36:52Z"
	I1128 04:37:06.221496 1326355 command_runner.go:130] >   name: coredns
	I1128 04:37:06.221501 1326355 command_runner.go:130] >   namespace: kube-system
	I1128 04:37:06.221506 1326355 command_runner.go:130] >   resourceVersion: "219"
	I1128 04:37:06.221512 1326355 command_runner.go:130] >   uid: db2cd5c9-b7fa-4580-9b92-873b141ceb6c
	I1128 04:37:06.221642 1326355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:37:06.222062 1326355 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:37:06.222319 1326355 kapi.go:59] client config for multinode-448128: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:37:06.222599 1326355 node_ready.go:35] waiting up to 6m0s for node "multinode-448128" to be "Ready" ...
	I1128 04:37:06.222684 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:06.222690 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:06.222699 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:06.222706 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:06.226143 1326355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:37:06.251737 1326355 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1128 04:37:06.251810 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:06.251832 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:06.251854 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:06.251890 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:06 GMT
	I1128 04:37:06.251914 1326355 round_trippers.go:580]     Audit-Id: 71db6f45-9eb1-4d1f-b82d-dd25f8c6e739
	I1128 04:37:06.251936 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:06.251974 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:06.256534 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:06.257422 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:06.257474 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:06.257498 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:06.257522 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:06.279259 1326355 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1128 04:37:06.279331 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:06.279354 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:06.279376 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:06.279410 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:06 GMT
	I1128 04:37:06.279434 1326355 round_trippers.go:580]     Audit-Id: 6a9cf2a9-1567-447e-bb4d-fee2536b591b
	I1128 04:37:06.279455 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:06.279490 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:06.280261 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:06.337976 1326355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:37:06.781612 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:06.781682 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:06.781720 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:06.781747 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:06.804050 1326355 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1128 04:37:06.804118 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:06.804141 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:06.804162 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:06.804211 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:06.804234 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:06.804255 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:06 GMT
	I1128 04:37:06.804291 1326355 round_trippers.go:580]     Audit-Id: 717fb1cd-6e8f-4193-a7a1-ec783b51b0e6
	I1128 04:37:06.812111 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:06.946980 1326355 command_runner.go:130] > configmap/coredns replaced
	I1128 04:37:06.952297 1326355 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
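	(Editor's note: the "host record injected" line above reports that the test driver edited CoreDNS's ConfigMap in kube-system so that host.minikube.internal resolves to 192.168.58.1. A minimal verification sketch follows; it is not minikube's own code, and the kubeconfig path and the ConfigMap name "coredns" are assumptions based on a standard kubeadm cluster.)

	// Hypothetical sketch: confirm the injected host record by scanning the
	// kube-system/coredns ConfigMap with client-go.
	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path (matches the path used by the ssh_runner above).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Scan all data keys rather than assuming which key holds the record.
		for key, val := range cm.Data {
			if strings.Contains(val, "host.minikube.internal") {
				fmt.Printf("host record found in ConfigMap key %q\n", key)
			}
		}
	}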
	I1128 04:37:07.133658 1326355 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1128 04:37:07.141603 1326355 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1128 04:37:07.151002 1326355 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1128 04:37:07.168971 1326355 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1128 04:37:07.179495 1326355 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1128 04:37:07.193067 1326355 command_runner.go:130] > pod/storage-provisioner created
	I1128 04:37:07.196999 1326355 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1128 04:37:07.197197 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1128 04:37:07.197224 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:07.197255 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:07.197276 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:07.208407 1326355 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1128 04:37:07.208481 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:07.208518 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:07.208545 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:07.208568 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:07.208603 1326355 round_trippers.go:580]     Content-Length: 1273
	I1128 04:37:07.208635 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:07 GMT
	I1128 04:37:07.208716 1326355 round_trippers.go:580]     Audit-Id: 6d773828-9884-43cf-b60f-22a425db96b2
	I1128 04:37:07.208743 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:07.208860 1326355 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"370"},"items":[{"metadata":{"name":"standard","uid":"21ef83f0-9988-4978-a17b-8e702587e90a","resourceVersion":"360","creationTimestamp":"2023-11-28T04:37:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-28T04:37:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1128 04:37:07.209345 1326355 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"21ef83f0-9988-4978-a17b-8e702587e90a","resourceVersion":"360","creationTimestamp":"2023-11-28T04:37:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-28T04:37:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1128 04:37:07.209431 1326355 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1128 04:37:07.209466 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:07.209491 1326355 round_trippers.go:473]     Content-Type: application/json
	I1128 04:37:07.209514 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:07.209552 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:07.213067 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:07.213126 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:07.213157 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:07.213177 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:07.213212 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:07.213237 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:07.213261 1326355 round_trippers.go:580]     Content-Length: 1220
	I1128 04:37:07.213295 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:07 GMT
	I1128 04:37:07.213319 1326355 round_trippers.go:580]     Audit-Id: bb327711-fb67-4ee3-9670-83f97a13f8bf
	I1128 04:37:07.213381 1326355 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"21ef83f0-9988-4978-a17b-8e702587e90a","resourceVersion":"360","creationTimestamp":"2023-11-28T04:37:06Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-28T04:37:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1128 04:37:07.217027 1326355 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1128 04:37:07.218977 1326355 addons.go:502] enable addons completed in 1.29170887s: enabled=[storage-provisioner default-storageclass]
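	(Editor's note: the GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses above is the standard read-modify-write update pattern: the "standard" StorageClass is fetched at resourceVersion 360, and the object is PUT back carrying that same resourceVersion so the API server can reject the write if the object changed in between. A minimal sketch of checking the resulting default class follows; it is an illustration, not minikube's code, and the kubeconfig path is assumed.)

	// Hypothetical sketch: list StorageClasses and report which one carries the
	// default-class annotation set by the default-storageclass addon.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		list, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, sc := range list.Items {
			// The annotation value "true" marks the cluster's default StorageClass.
			if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
				fmt.Printf("default StorageClass: %s (provisioner %s)\n", sc.Name, sc.Provisioner)
			}
		}
	}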
	I1128 04:37:07.281058 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:07.281081 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:07.281091 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:07.281098 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:07.283510 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:07.283580 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:07.283603 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:07.283626 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:07.283660 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:07.283687 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:07.283711 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:07 GMT
	I1128 04:37:07.283747 1326355 round_trippers.go:580]     Audit-Id: b96154fc-131b-4170-8d5a-80eb51f2844b
	I1128 04:37:07.284335 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:07.781896 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:07.781919 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:07.781929 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:07.781936 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:07.784420 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:07.784493 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:07.784554 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:07 GMT
	I1128 04:37:07.784580 1326355 round_trippers.go:580]     Audit-Id: fa9888e1-26ef-43ad-a4d5-baa3673b81ef
	I1128 04:37:07.784593 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:07.784600 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:07.784610 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:07.784617 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:07.784742 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:08.281127 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:08.281153 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:08.281163 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:08.281170 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:08.283906 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:08.283988 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:08.284016 1326355 round_trippers.go:580]     Audit-Id: 5ea3550c-ab9f-46d6-9f9b-f9b6a85300b4
	I1128 04:37:08.284060 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:08.284075 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:08.284083 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:08.284089 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:08.284095 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:08 GMT
	I1128 04:37:08.284305 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:08.284758 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
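	(Editor's note: the node_ready.go lines like the one above are a polling loop: every ~500ms the driver GETs the Node and re-checks its Ready condition until kubelet and the CNI settle. A minimal sketch of one such check follows; it is not the minikube source, and the kubeconfig path is an assumption.)

	// Hypothetical sketch: one iteration of the readiness check the loop above
	// performs, i.e. fetch the Node and inspect its NodeReady condition.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-448128", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Prints "False" while the node is still coming up, as in the log above.
				fmt.Printf("node %q Ready=%s\n", node.Name, c.Status)
			}
		}
	}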
	I1128 04:37:08.781221 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:08.781244 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:08.781254 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:08.781261 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:08.783912 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:08.783938 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:08.783948 1326355 round_trippers.go:580]     Audit-Id: b6d4e293-d3fa-4ea1-9838-12e23e5419ab
	I1128 04:37:08.783955 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:08.783961 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:08.783968 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:08.783976 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:08.783983 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:08 GMT
	I1128 04:37:08.784677 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:09.281063 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:09.281089 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:09.281099 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:09.281107 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:09.283519 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:09.283539 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:09.283547 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:09.283554 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:09.283560 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:09 GMT
	I1128 04:37:09.283567 1326355 round_trippers.go:580]     Audit-Id: c1a5e1fe-d632-4e60-a206-f73c3e6f3840
	I1128 04:37:09.283573 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:09.283581 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:09.283823 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:09.781964 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:09.781986 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:09.781995 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:09.782003 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:09.784495 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:09.784516 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:09.784524 1326355 round_trippers.go:580]     Audit-Id: 31b3311a-6567-427b-8a88-1a6045bf949f
	I1128 04:37:09.784531 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:09.784537 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:09.784543 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:09.784550 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:09.784556 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:09 GMT
	I1128 04:37:09.784732 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:10.281259 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:10.281284 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:10.281293 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:10.281300 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:10.283675 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:10.283698 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:10.283707 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:10.283713 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:10.283720 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:10 GMT
	I1128 04:37:10.283751 1326355 round_trippers.go:580]     Audit-Id: 1bf978d0-d241-4c7c-b73e-0d992eaaa352
	I1128 04:37:10.283765 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:10.283774 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:10.283982 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:10.781073 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:10.781099 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:10.781110 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:10.781117 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:10.783579 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:10.783600 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:10.783608 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:10.783615 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:10.783621 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:10.783628 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:10.783634 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:10 GMT
	I1128 04:37:10.783640 1326355 round_trippers.go:580]     Audit-Id: 37f2e74b-180b-4185-9e64-5c52a9921c63
	I1128 04:37:10.783760 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:10.784162 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:11.281912 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:11.281937 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:11.281948 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:11.281955 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:11.284542 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:11.284567 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:11.284577 1326355 round_trippers.go:580]     Audit-Id: aeae26ab-8c63-46df-8064-83e8f3fb6e81
	I1128 04:37:11.284584 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:11.284590 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:11.284596 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:11.284603 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:11.284609 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:11 GMT
	I1128 04:37:11.284878 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:11.781269 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:11.781297 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:11.781308 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:11.781315 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:11.784011 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:11.784037 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:11.784046 1326355 round_trippers.go:580]     Audit-Id: 7ebb30b6-1c64-485c-9017-2b96e147cd13
	I1128 04:37:11.784053 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:11.784062 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:11.784070 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:11.784076 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:11.784083 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:11 GMT
	I1128 04:37:11.784205 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:12.281102 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:12.281127 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:12.281140 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:12.281148 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:12.283700 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:12.283721 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:12.283729 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:12.283737 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:12.283743 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:12.283750 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:12 GMT
	I1128 04:37:12.283756 1326355 round_trippers.go:580]     Audit-Id: 2eb8d769-d918-478a-9fa8-63d4f1f77d5c
	I1128 04:37:12.283762 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:12.283882 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:12.781077 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:12.781103 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:12.781112 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:12.781119 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:12.783607 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:12.783632 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:12.783640 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:12.783647 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:12.783653 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:12 GMT
	I1128 04:37:12.783660 1326355 round_trippers.go:580]     Audit-Id: 132e9acd-d8c4-432c-8a5e-957f1478a5c1
	I1128 04:37:12.783666 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:12.783672 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:12.783855 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:12.784256 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:13.281987 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:13.282011 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:13.282021 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:13.282029 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:13.284544 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:13.284568 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:13.284577 1326355 round_trippers.go:580]     Audit-Id: 66fd6f2c-b5ce-40d5-9be3-68690d47357e
	I1128 04:37:13.284583 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:13.284589 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:13.284595 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:13.284601 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:13.284607 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:13 GMT
	I1128 04:37:13.284819 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:13.781570 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:13.781593 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:13.781604 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:13.781611 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:13.784052 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:13.784078 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:13.784088 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:13 GMT
	I1128 04:37:13.784095 1326355 round_trippers.go:580]     Audit-Id: 47d6e305-fa70-4912-855a-e1f2dbd7e683
	I1128 04:37:13.784101 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:13.784108 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:13.784114 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:13.784123 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:13.784366 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:14.281085 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:14.281159 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:14.281169 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:14.281176 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:14.283662 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:14.283684 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:14.283693 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:14.283701 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:14 GMT
	I1128 04:37:14.283707 1326355 round_trippers.go:580]     Audit-Id: 9e723e9e-6b36-4a20-be0a-2ed0fa1ac8f8
	I1128 04:37:14.283713 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:14.283719 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:14.283725 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:14.284163 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:14.781892 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:14.781915 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:14.781924 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:14.781931 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:14.784511 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:14.784534 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:14.784545 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:14.784552 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:14.784558 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:14 GMT
	I1128 04:37:14.784565 1326355 round_trippers.go:580]     Audit-Id: ea479f84-73c1-4661-886d-90ee71104981
	I1128 04:37:14.784571 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:14.784577 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:14.784699 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:14.785097 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:15.281373 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:15.281401 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:15.281410 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:15.281418 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:15.283903 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:15.283927 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:15.283936 1326355 round_trippers.go:580]     Audit-Id: 757791b8-fa9f-47b6-add4-8ec98d7b1b73
	I1128 04:37:15.283942 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:15.283948 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:15.283954 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:15.283961 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:15.283973 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:15 GMT
	I1128 04:37:15.284130 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:15.782017 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:15.782043 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:15.782053 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:15.782061 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:15.784557 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:15.784579 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:15.784587 1326355 round_trippers.go:580]     Audit-Id: 299dff68-b4e7-424b-9bbc-88d5368f1252
	I1128 04:37:15.784594 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:15.784601 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:15.784607 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:15.784613 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:15.784619 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:15 GMT
	I1128 04:37:15.784761 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:16.281871 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:16.281903 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:16.281917 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:16.281925 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:16.284897 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:16.284934 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:16.284947 1326355 round_trippers.go:580]     Audit-Id: 8b3bfc4a-e013-4b64-93c1-c833cbc8c190
	I1128 04:37:16.284954 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:16.284963 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:16.284969 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:16.284988 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:16.284995 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:16 GMT
	I1128 04:37:16.286072 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:16.781620 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:16.781645 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:16.781655 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:16.781662 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:16.784183 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:16.784204 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:16.784213 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:16.784221 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:16 GMT
	I1128 04:37:16.784227 1326355 round_trippers.go:580]     Audit-Id: 77f340b7-5f80-4d24-8541-d7b813948943
	I1128 04:37:16.784234 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:16.784240 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:16.784246 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:16.784413 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:17.280997 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:17.281026 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:17.281045 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:17.281055 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:17.283695 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:17.283719 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:17.283728 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:17.283735 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:17.283741 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:17.283748 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:17 GMT
	I1128 04:37:17.283754 1326355 round_trippers.go:580]     Audit-Id: 7a562daa-dad7-4f2c-9175-a720dba9f7d0
	I1128 04:37:17.283761 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:17.284173 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:17.284589 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:17.781310 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:17.781334 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:17.781344 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:17.781352 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:17.783853 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:17.783873 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:17.783882 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:17 GMT
	I1128 04:37:17.783889 1326355 round_trippers.go:580]     Audit-Id: 5bf46b06-61d1-4b91-a9d8-a0b9c54ceb36
	I1128 04:37:17.783895 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:17.783902 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:17.783908 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:17.783914 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:17.784037 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:18.281090 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:18.281115 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:18.281125 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:18.281133 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:18.283711 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:18.283733 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:18.283743 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:18.283749 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:18.283756 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:18.283762 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:18.283768 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:18 GMT
	I1128 04:37:18.283774 1326355 round_trippers.go:580]     Audit-Id: d4656a72-a1b0-4d1d-a420-90db742227a1
	I1128 04:37:18.283932 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:18.780984 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:18.781009 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:18.781019 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:18.781029 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:18.783499 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:18.783521 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:18.783530 1326355 round_trippers.go:580]     Audit-Id: 36d0b566-9fb7-46ba-a899-bfa2dbf89713
	I1128 04:37:18.783536 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:18.783542 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:18.783548 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:18.783554 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:18.783561 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:18 GMT
	I1128 04:37:18.783675 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:19.281876 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:19.281902 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:19.281911 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:19.281919 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:19.284498 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:19.284520 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:19.284529 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:19.284535 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:19.284542 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:19 GMT
	I1128 04:37:19.284548 1326355 round_trippers.go:580]     Audit-Id: 567db45a-fc5d-4e18-80af-10fbe77c8b57
	I1128 04:37:19.284555 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:19.284561 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:19.285036 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:19.285445 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:19.781706 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:19.781733 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:19.781744 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:19.781751 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:19.784324 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:19.784344 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:19.784353 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:19.784359 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:19.784366 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:19 GMT
	I1128 04:37:19.784372 1326355 round_trippers.go:580]     Audit-Id: 4ee3c87c-5be8-4552-befe-ef2eed8ee74c
	I1128 04:37:19.784378 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:19.784384 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:19.784649 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:20.281371 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:20.281396 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:20.281405 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:20.281413 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:20.284047 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:20.284070 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:20.284079 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:20.284085 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:20.284092 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:20.284099 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:20 GMT
	I1128 04:37:20.284105 1326355 round_trippers.go:580]     Audit-Id: b50c7b4f-676c-4120-a6bb-8e9f90d6b5ea
	I1128 04:37:20.284112 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:20.284243 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:20.781098 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:20.781130 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:20.781140 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:20.781147 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:20.783787 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:20.783818 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:20.783828 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:20 GMT
	I1128 04:37:20.783835 1326355 round_trippers.go:580]     Audit-Id: 459e1222-075e-4224-8140-f4999d425fc0
	I1128 04:37:20.783842 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:20.783852 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:20.783859 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:20.783870 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:20.783971 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:21.281299 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:21.281327 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:21.281337 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:21.281344 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:21.284001 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:21.284030 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:21.284044 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:21.284051 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:21.284058 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:21 GMT
	I1128 04:37:21.284068 1326355 round_trippers.go:580]     Audit-Id: 432fbde0-3ed9-495e-a8b7-f85d2c58ee27
	I1128 04:37:21.284074 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:21.284084 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:21.284395 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:21.781078 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:21.781102 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:21.781111 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:21.781119 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:21.783665 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:21.783687 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:21.783696 1326355 round_trippers.go:580]     Audit-Id: d2ae5bc0-db4c-42f8-a6c4-be413609adac
	I1128 04:37:21.783703 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:21.783709 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:21.783715 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:21.783721 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:21.783727 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:21 GMT
	I1128 04:37:21.783844 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:21.784247 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:22.281947 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:22.281972 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:22.281982 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:22.281990 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:22.284630 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:22.284680 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:22.284690 1326355 round_trippers.go:580]     Audit-Id: 1f9bbccc-0382-4f31-a408-7a3b220de3ed
	I1128 04:37:22.284697 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:22.284704 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:22.284711 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:22.284718 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:22.284731 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:22 GMT
	I1128 04:37:22.284842 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:22.782025 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:22.782049 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:22.782059 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:22.782067 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:22.784697 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:22.784722 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:22.784731 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:22.784738 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:22.784744 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:22 GMT
	I1128 04:37:22.784751 1326355 round_trippers.go:580]     Audit-Id: 8b162d03-2f9c-4539-8003-cff65c19fa39
	I1128 04:37:22.784757 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:22.784763 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:22.785090 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:23.281909 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:23.281937 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:23.281946 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:23.281954 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:23.284398 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:23.284426 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:23.284435 1326355 round_trippers.go:580]     Audit-Id: 98e6c86c-17b1-4a61-9712-7a1045555051
	I1128 04:37:23.284442 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:23.284448 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:23.284455 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:23.284461 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:23.284468 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:23 GMT
	I1128 04:37:23.284594 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:23.781754 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:23.781776 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:23.781792 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:23.781799 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:23.785147 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:23.785172 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:23.785181 1326355 round_trippers.go:580]     Audit-Id: 9db1ea45-f6bb-4a3c-b53d-15f19d353495
	I1128 04:37:23.785188 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:23.785194 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:23.785200 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:23.785206 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:23.785213 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:23 GMT
	I1128 04:37:23.785332 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:23.785757 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:24.281213 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:24.281240 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:24.281250 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:24.281258 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:24.284094 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:24.284121 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:24.284131 1326355 round_trippers.go:580]     Audit-Id: 9038e5a4-45f4-4f42-af6d-55b577209afe
	I1128 04:37:24.284137 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:24.284144 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:24.284150 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:24.284156 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:24.284163 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:24 GMT
	I1128 04:37:24.284285 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:24.781339 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:24.781364 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:24.781373 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:24.781381 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:24.783759 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:24.783786 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:24.783795 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:24.783802 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:24 GMT
	I1128 04:37:24.783808 1326355 round_trippers.go:580]     Audit-Id: 660e29ee-9a48-43e1-b0af-1c7161865748
	I1128 04:37:24.783814 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:24.783820 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:24.783826 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:24.783930 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:25.281648 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:25.281676 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:25.281687 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:25.281694 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:25.284251 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:25.284281 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:25.284292 1326355 round_trippers.go:580]     Audit-Id: abdf7401-1929-4198-a9d5-b9237b5bc0be
	I1128 04:37:25.284299 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:25.284305 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:25.284312 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:25.284319 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:25.284327 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:25 GMT
	I1128 04:37:25.284465 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:25.781110 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:25.781137 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:25.781148 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:25.781156 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:25.783638 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:25.783661 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:25.783670 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:25 GMT
	I1128 04:37:25.783677 1326355 round_trippers.go:580]     Audit-Id: 8e166083-ce54-483f-946e-db3b042c77ba
	I1128 04:37:25.783685 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:25.783691 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:25.783697 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:25.783704 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:25.783807 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:26.281721 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:26.281747 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:26.281757 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:26.281764 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:26.284770 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:26.284799 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:26.284808 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:26.284815 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:26.284822 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:26 GMT
	I1128 04:37:26.284829 1326355 round_trippers.go:580]     Audit-Id: 2f33e163-9849-4f67-9dc4-b1abcab2a660
	I1128 04:37:26.284835 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:26.284841 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:26.284976 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:26.285394 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:26.781993 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:26.782016 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:26.782025 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:26.782032 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:26.784521 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:26.784543 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:26.784552 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:26.784563 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:26.784569 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:26.784578 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:26 GMT
	I1128 04:37:26.784584 1326355 round_trippers.go:580]     Audit-Id: 69d3199d-dc72-439e-9b7a-12d0b793097e
	I1128 04:37:26.784590 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:26.784781 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:27.281394 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:27.281422 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:27.281432 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:27.281439 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:27.283879 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:27.283899 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:27.283908 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:27.283914 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:27.283920 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:27.283926 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:27 GMT
	I1128 04:37:27.283933 1326355 round_trippers.go:580]     Audit-Id: 5d549d96-e4c5-48e6-9c31-3dbee91f9a1f
	I1128 04:37:27.283940 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:27.284070 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:27.781215 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:27.781241 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:27.781251 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:27.781259 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:27.784019 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:27.784041 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:27.784049 1326355 round_trippers.go:580]     Audit-Id: bc529050-6da8-44e3-b9dd-247cbe3cde1d
	I1128 04:37:27.784055 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:27.784063 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:27.784069 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:27.784075 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:27.784082 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:27 GMT
	I1128 04:37:27.784199 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:28.281553 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:28.281579 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:28.281588 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:28.281595 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:28.284059 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:28.284089 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:28.284099 1326355 round_trippers.go:580]     Audit-Id: 29064f35-5b8d-4f4c-a460-5c329e984781
	I1128 04:37:28.284106 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:28.284113 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:28.284120 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:28.284127 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:28.284133 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:28 GMT
	I1128 04:37:28.284266 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:28.781059 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:28.781083 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:28.781093 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:28.781100 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:28.783738 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:28.783765 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:28.783774 1326355 round_trippers.go:580]     Audit-Id: 7c5ff252-cb45-459c-aa97-ef20e1964f23
	I1128 04:37:28.783781 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:28.783787 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:28.783793 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:28.783800 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:28.783806 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:28 GMT
	I1128 04:37:28.783926 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:28.784330 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:29.280998 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:29.281031 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:29.281042 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:29.281050 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:29.283693 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:29.283719 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:29.283728 1326355 round_trippers.go:580]     Audit-Id: 4fe24e3a-a62e-44f1-89a5-4d537f6c4ebd
	I1128 04:37:29.283735 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:29.283741 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:29.283747 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:29.283754 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:29.283761 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:29 GMT
	I1128 04:37:29.283891 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:29.780990 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:29.781014 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:29.781024 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:29.781031 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:29.783757 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:29.783778 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:29.783786 1326355 round_trippers.go:580]     Audit-Id: 5dfaae41-7522-48a8-8269-e18638f52274
	I1128 04:37:29.783799 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:29.783808 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:29.783814 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:29.783820 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:29.783827 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:29 GMT
	I1128 04:37:29.783945 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:30.281550 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:30.281580 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:30.281589 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:30.281596 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:30.284325 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:30.284346 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:30.284356 1326355 round_trippers.go:580]     Audit-Id: e67cca7e-ee10-4c87-9cbb-ef209a4ced1f
	I1128 04:37:30.284362 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:30.284368 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:30.284375 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:30.284381 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:30.284387 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:30 GMT
	I1128 04:37:30.284518 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:30.781399 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:30.781426 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:30.781437 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:30.781444 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:30.785453 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:30.785489 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:30.785499 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:30.785506 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:30.785513 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:30.785524 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:30.785541 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:30 GMT
	I1128 04:37:30.785548 1326355 round_trippers.go:580]     Audit-Id: d8a477b9-474c-40bf-beb0-11ad71e144ad
	I1128 04:37:30.785900 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:30.786326 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:31.281613 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:31.281641 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:31.281651 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:31.281658 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:31.284087 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:31.284108 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:31.284117 1326355 round_trippers.go:580]     Audit-Id: 6b6aa267-77c6-433f-8736-58e0c4d02c62
	I1128 04:37:31.284124 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:31.284130 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:31.284137 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:31.284143 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:31.284150 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:31 GMT
	I1128 04:37:31.284330 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:31.781090 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:31.781115 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:31.781126 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:31.781133 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:31.783804 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:31.783830 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:31.783839 1326355 round_trippers.go:580]     Audit-Id: 0d726e54-1f41-4fe0-a009-338cf5e6f6c1
	I1128 04:37:31.783846 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:31.783852 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:31.783858 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:31.783865 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:31.783872 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:31 GMT
	I1128 04:37:31.784273 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:32.281278 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:32.281303 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:32.281313 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:32.281320 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:32.283991 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:32.284026 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:32.284036 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:32.284043 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:32.284050 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:32 GMT
	I1128 04:37:32.284058 1326355 round_trippers.go:580]     Audit-Id: 5b5c2bf8-ada1-40d3-ab8c-779f9862af1f
	I1128 04:37:32.284065 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:32.284071 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:32.284373 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:32.781087 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:32.781113 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:32.781123 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:32.781132 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:32.783649 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:32.783673 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:32.783681 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:32.783688 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:32 GMT
	I1128 04:37:32.783695 1326355 round_trippers.go:580]     Audit-Id: 15f263a3-ba89-402f-a19e-c36099208dce
	I1128 04:37:32.783701 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:32.783707 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:32.783713 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:32.783844 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:33.281077 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:33.281103 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:33.281113 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:33.281120 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:33.283641 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:33.283667 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:33.283677 1326355 round_trippers.go:580]     Audit-Id: e19c22af-84aa-4994-b256-7d6a1c6e79df
	I1128 04:37:33.283684 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:33.283691 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:33.283697 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:33.283704 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:33.283715 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:33 GMT
	I1128 04:37:33.283831 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:33.284257 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
	I1128 04:37:33.780991 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:33.781016 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:33.781026 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:33.781033 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:33.783772 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:33.783800 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:33.783809 1326355 round_trippers.go:580]     Audit-Id: 1dfb1d78-c0fc-4462-8ea2-0ed42c939663
	I1128 04:37:33.783825 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:33.783831 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:33.783842 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:33.783849 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:33.783859 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:33 GMT
	I1128 04:37:33.783958 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:34.281786 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:34.281815 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:34.281825 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:34.281832 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:34.284409 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:34.284431 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:34.284440 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:34.284447 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:34.284455 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:34.284461 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:34 GMT
	I1128 04:37:34.284467 1326355 round_trippers.go:580]     Audit-Id: 56148989-dd0c-4dc2-ae12-0dcc892d074a
	I1128 04:37:34.284473 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:34.284588 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:34.781308 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:34.781332 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:34.781341 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:34.781349 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:34.783791 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:34.783824 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:34.783834 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:34.783841 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:34.783847 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:34.783854 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:34.783866 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:34 GMT
	I1128 04:37:34.783875 1326355 round_trippers.go:580]     Audit-Id: 97ebeba5-18a3-40e8-b2c3-c7249e8ef0e1
	I1128 04:37:34.784116 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:35.281795 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:35.281820 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:35.281830 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:35.281838 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:35.284446 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:35.284467 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:35.284477 1326355 round_trippers.go:580]     Audit-Id: 076b4e92-3fcb-43e6-b487-d8ad2e5d6f90
	I1128 04:37:35.284483 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:35.284490 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:35.284496 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:35.284502 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:35.284509 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:35 GMT
	I1128 04:37:35.284643 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:35.285054 1326355 node_ready.go:58] node "multinode-448128" has status "Ready":"False"
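An aside on the polling pattern: the response bodies keep reporting resourceVersion "302", so the Node object is not changing between polls. A watch would surface the eventual change over one long-lived request instead of a GET every half second. A hedged sketch of that alternative, reusing the imports from the sketch above; watchNodeReady is an illustrative name and this is not what minikube does here:

// watchNodeReady is an illustrative alternative (assumed, not
// minikube's code): a watch scoped to one node replaces the
// half-second GET loop, and the resourceVersion seen in the bodies
// above ("302") is where the stream would pick up.
func watchNodeReady(ctx context.Context, cs kubernetes.Interface, name, rv string) error {
	w, err := cs.CoreV1().Nodes().Watch(ctx, metav1.ListOptions{
		FieldSelector:   "metadata.name=" + name,
		ResourceVersion: rv,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if node, ok := ev.Object.(*corev1.Node); ok {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
	}
	return fmt.Errorf("watch on node %q ended before Ready", name)
}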
	I1128 04:37:35.781378 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:35.781404 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:35.781413 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:35.781421 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:35.784111 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:35.784173 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:35.784184 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:35.784190 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:35.784204 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:35.784211 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:35 GMT
	I1128 04:37:35.784217 1326355 round_trippers.go:580]     Audit-Id: 39b8eff6-4311-42eb-993e-dac91789f182
	I1128 04:37:35.784224 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:35.784339 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:36.281887 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:36.281917 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:36.281935 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:36.281943 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:36.284631 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:36.284720 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:36.284735 1326355 round_trippers.go:580]     Audit-Id: 5eb0db74-f611-47ac-b133-f338662b9c46
	I1128 04:37:36.284748 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:36.284755 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:36.284762 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:36.284770 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:36.284777 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:36 GMT
	I1128 04:37:36.284928 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:36.781359 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:36.781385 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:36.781394 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:36.781402 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:36.784390 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:36.784415 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:36.784424 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:36.784431 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:36.784437 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:36.784443 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:36.784457 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:36 GMT
	I1128 04:37:36.784463 1326355 round_trippers.go:580]     Audit-Id: 6bf395b8-7fd4-4a42-97db-66638d7affc1
	I1128 04:37:36.784571 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"302","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1128 04:37:37.281812 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:37.281843 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:37.281853 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:37.281860 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:37.284639 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:37.284685 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:37.284712 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:37 GMT
	I1128 04:37:37.284725 1326355 round_trippers.go:580]     Audit-Id: 6388247d-cda3-4f00-a7da-7b48b99b5716
	I1128 04:37:37.284733 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:37.284743 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:37.284750 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:37.284758 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:37.285148 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:37.286142 1326355 node_ready.go:49] node "multinode-448128" has status "Ready":"True"
	I1128 04:37:37.286168 1326355 node_ready.go:38] duration metric: took 31.063554203s waiting for node "multinode-448128" to be "Ready" ...
	I1128 04:37:37.286179 1326355 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
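The wait now switches from the node to the system-critical pods: a single unfiltered GET of /api/v1/namespaces/kube-system/pods (the next request below), after which each pod matching the listed labels is tracked individually. A sketch of that list-then-filter step under the same client-go assumptions; systemCriticalPods and isSystemCritical are illustrative names, and the client-side filter is an assumption suggested by the unfiltered request URL:

// systemCriticalPods mirrors the step logged above: fetch every pod in
// kube-system in one call, then keep only those carrying one of the
// logged system-critical labels.
func systemCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var out []corev1.Pod
	for _, p := range list.Items {
		if isSystemCritical(p.Labels) {
			out = append(out, p)
		}
	}
	return out, nil
}

// isSystemCritical checks the label pairs named in the log line above:
// k8s-app=kube-dns, component=etcd, component=kube-apiserver,
// component=kube-controller-manager, k8s-app=kube-proxy,
// component=kube-scheduler.
func isSystemCritical(labels map[string]string) bool {
	switch {
	case labels["k8s-app"] == "kube-dns", labels["k8s-app"] == "kube-proxy":
		return true
	case labels["component"] == "etcd", labels["component"] == "kube-apiserver",
		labels["component"] == "kube-controller-manager", labels["component"] == "kube-scheduler":
		return true
	}
	return false
}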
	I1128 04:37:37.286257 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1128 04:37:37.286268 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:37.286277 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:37.286284 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:37.290072 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:37.290102 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:37.290112 1326355 round_trippers.go:580]     Audit-Id: 57d37918-8fb7-47cb-aba5-32f40f7774c5
	I1128 04:37:37.290119 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:37.290126 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:37.290132 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:37.290143 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:37.290153 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:37 GMT
	I1128 04:37:37.290763 1326355 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"392"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"390","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1128 04:37:37.294742 1326355 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h99h4" in "kube-system" namespace to be "Ready" ...
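From here each iteration pairs a GET of the coredns pod with a GET of its node, re-polling until the pod reports Ready or the 6m budget runs out. The readiness test itself, sketched under the same assumptions with podIsReady as an illustrative name:

// podIsReady treats a pod as "Ready" when its PodReady condition
// reports True, which is what the per-pod wait below keeps re-checking.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}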
	I1128 04:37:37.294838 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h99h4
	I1128 04:37:37.294848 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:37.294857 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:37.294871 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:37.297997 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:37.298021 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:37.298030 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:37.298036 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:37.298043 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:37.298050 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:37.298057 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:37 GMT
	I1128 04:37:37.298063 1326355 round_trippers.go:580]     Audit-Id: 8d6508e5-797c-48cb-b669-de1979e9efeb
	I1128 04:37:37.298452 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"390","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1128 04:37:37.298996 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:37.299015 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:37.299025 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:37.299032 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:37.301690 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:37.301715 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:37.301724 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:37 GMT
	I1128 04:37:37.301731 1326355 round_trippers.go:580]     Audit-Id: 8b201f4b-1bb8-4526-adb8-f2fbc13130f0
	I1128 04:37:37.301737 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:37.301743 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:37.301750 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:37.301760 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:37.301981 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:37.302442 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h99h4
	I1128 04:37:37.302458 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:37.302468 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:37.302475 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:37.305106 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:37.305172 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:37.305195 1326355 round_trippers.go:580]     Audit-Id: a16218b8-c4e9-47b9-a55c-58add5bde000
	I1128 04:37:37.305219 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:37.305249 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:37.305258 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:37.305265 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:37.305271 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:37 GMT
	I1128 04:37:37.305395 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"390","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1128 04:37:37.305920 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:37.305938 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:37.305947 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:37.305955 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:37.308435 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:37.308458 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:37.308468 1326355 round_trippers.go:580]     Audit-Id: 0b9cdfc9-97d2-466e-a92f-503e15ce6875
	I1128 04:37:37.308475 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:37.308481 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:37.308487 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:37.308494 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:37.308503 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:37 GMT
	I1128 04:37:37.308898 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:37.810045 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h99h4
	I1128 04:37:37.810072 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:37.810082 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:37.810094 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:37.812740 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:37.812805 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:37.812826 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:37.812849 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:37 GMT
	I1128 04:37:37.812882 1326355 round_trippers.go:580]     Audit-Id: 46b437ad-3c39-466c-ae6a-c6e88c7bda20
	I1128 04:37:37.812904 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:37.812922 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:37.812943 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:37.813080 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"390","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1128 04:37:37.813618 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:37.813634 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:37.813643 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:37.813653 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:37.816134 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:37.816156 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:37.816167 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:37.816173 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:37 GMT
	I1128 04:37:37.816180 1326355 round_trippers.go:580]     Audit-Id: c2e221da-32ef-49a7-9731-5b639fadb407
	I1128 04:37:37.816186 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:37.816193 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:37.816200 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:37.816390 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:38.309859 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h99h4
	I1128 04:37:38.309896 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.309911 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.309926 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.312728 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.312753 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.312761 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.312768 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.312775 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.312782 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.312788 1326355 round_trippers.go:580]     Audit-Id: e98ec04d-12ee-4a2b-8b2a-65387536c89a
	I1128 04:37:38.312794 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.313079 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"390","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1128 04:37:38.313643 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:38.313661 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.313671 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.313678 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.316244 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.316311 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.316335 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.316354 1326355 round_trippers.go:580]     Audit-Id: 2c782a64-acf6-4027-ae26-b3dac7327d66
	I1128 04:37:38.316393 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.316418 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.316434 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.316442 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.316618 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:38.809984 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h99h4
	I1128 04:37:38.810011 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.810030 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.810037 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.812845 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.812909 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.812933 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.812958 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.812990 1326355 round_trippers.go:580]     Audit-Id: 17b417ca-ef7f-4f54-9101-2088ee0be439
	I1128 04:37:38.813027 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.813046 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.813069 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.813214 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"404","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1128 04:37:38.813839 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:38.813862 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.813872 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.813882 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.816241 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.816264 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.816272 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.816279 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.816285 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.816299 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.816306 1326355 round_trippers.go:580]     Audit-Id: 2228c668-0076-419c-b9aa-2ae5d29e96f8
	I1128 04:37:38.816315 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.816560 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:38.817000 1326355 pod_ready.go:92] pod "coredns-5dd5756b68-h99h4" in "kube-system" namespace has status "Ready":"True"
	I1128 04:37:38.817018 1326355 pod_ready.go:81] duration metric: took 1.52224474s waiting for pod "coredns-5dd5756b68-h99h4" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:38.817031 1326355 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:38.817104 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-448128
	I1128 04:37:38.817112 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.817121 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.817128 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.819706 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.819729 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.819738 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.819746 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.819752 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.819759 1326355 round_trippers.go:580]     Audit-Id: 92696ed0-76e5-42c4-a6fd-9e3dc73a3291
	I1128 04:37:38.819765 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.819771 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.819998 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-448128","namespace":"kube-system","uid":"121c97bc-fd53-4694-a919-1df709813895","resourceVersion":"260","creationTimestamp":"2023-11-28T04:36:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"202ec28270a39c70a9db6e2ad9deefcf","kubernetes.io/config.mirror":"202ec28270a39c70a9db6e2ad9deefcf","kubernetes.io/config.seen":"2023-11-28T04:36:52.405621949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1128 04:37:38.820566 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:38.820585 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.820597 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.820609 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.823105 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.823129 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.823138 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.823144 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.823151 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.823158 1326355 round_trippers.go:580]     Audit-Id: 61384647-e4b9-469d-bc5d-cd2b4d251c0d
	I1128 04:37:38.823167 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.823173 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.823392 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:38.823807 1326355 pod_ready.go:92] pod "etcd-multinode-448128" in "kube-system" namespace has status "Ready":"True"
	I1128 04:37:38.823826 1326355 pod_ready.go:81] duration metric: took 6.77846ms waiting for pod "etcd-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:38.823842 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:38.823912 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-448128
	I1128 04:37:38.823923 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.823932 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.823939 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.826548 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.826618 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.826652 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.826690 1326355 round_trippers.go:580]     Audit-Id: 526b9e7d-a691-4b6c-9b2d-794c27f189a3
	I1128 04:37:38.826721 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.826743 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.826785 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.826822 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.827048 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-448128","namespace":"kube-system","uid":"ed60cc18-21a7-4a58-b1bf-929498ac7681","resourceVersion":"257","creationTimestamp":"2023-11-28T04:36:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"c0a401a1d2128f26b499573a61001053","kubernetes.io/config.mirror":"c0a401a1d2128f26b499573a61001053","kubernetes.io/config.seen":"2023-11-28T04:36:52.405629071Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1128 04:37:38.827772 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:38.827803 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.827812 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.827819 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.830646 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.830679 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.830694 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.830701 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.830711 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.830717 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.830733 1326355 round_trippers.go:580]     Audit-Id: e7f54516-6652-4504-a6e8-06bfc5810ded
	I1128 04:37:38.830740 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.831216 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:38.831676 1326355 pod_ready.go:92] pod "kube-apiserver-multinode-448128" in "kube-system" namespace has status "Ready":"True"
	I1128 04:37:38.831720 1326355 pod_ready.go:81] duration metric: took 7.865371ms waiting for pod "kube-apiserver-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:38.831738 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:38.831803 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-448128
	I1128 04:37:38.831814 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.831822 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.831829 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.834548 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.834577 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.834593 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.834600 1326355 round_trippers.go:580]     Audit-Id: 2468962f-d9b0-4e6c-b911-da73e718dd65
	I1128 04:37:38.834607 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.834626 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.834634 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.834652 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.834858 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-448128","namespace":"kube-system","uid":"b57f8849-158b-426b-ada5-bb5ea7e23ec8","resourceVersion":"256","creationTimestamp":"2023-11-28T04:36:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2900dc80bd738cc7d6eb7e628235b5db","kubernetes.io/config.mirror":"2900dc80bd738cc7d6eb7e628235b5db","kubernetes.io/config.seen":"2023-11-28T04:36:52.405630704Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1128 04:37:38.882739 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:38.882764 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:38.882774 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:38.882781 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:38.885468 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:38.885538 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:38.885561 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:38.885584 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:38.885623 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:38.885650 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:38 GMT
	I1128 04:37:38.885665 1326355 round_trippers.go:580]     Audit-Id: d282be39-b6e1-4083-be50-4f9433b7cf9a
	I1128 04:37:38.885672 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:38.885819 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:38.886228 1326355 pod_ready.go:92] pod "kube-controller-manager-multinode-448128" in "kube-system" namespace has status "Ready":"True"
	I1128 04:37:38.886248 1326355 pod_ready.go:81] duration metric: took 54.501368ms waiting for pod "kube-controller-manager-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:38.886260 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mskz2" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:39.082707 1326355 request.go:629] Waited for 196.384461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mskz2
	I1128 04:37:39.082794 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mskz2
	I1128 04:37:39.082801 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:39.082809 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:39.082821 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:39.085431 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:39.085501 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:39.085523 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:39.085547 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:39.085588 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:39.085617 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:39.085639 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:39 GMT
	I1128 04:37:39.085661 1326355 round_trippers.go:580]     Audit-Id: f79ca287-36a1-4b58-8838-aba2a1a3bdf4
	I1128 04:37:39.086220 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mskz2","generateName":"kube-proxy-","namespace":"kube-system","uid":"36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465","resourceVersion":"371","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a7620ae4-04a3-4706-a2ec-b3cd57b95023","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a7620ae4-04a3-4706-a2ec-b3cd57b95023\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1128 04:37:39.281904 1326355 request.go:629] Waited for 195.147987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:39.281985 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:39.281995 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:39.282004 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:39.282011 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:39.284735 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:39.284798 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:39.284820 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:39 GMT
	I1128 04:37:39.284843 1326355 round_trippers.go:580]     Audit-Id: b7842557-61e8-45be-a8e6-6745003bbd57
	I1128 04:37:39.284877 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:39.284902 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:39.284922 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:39.284936 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:39.285074 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:39.285552 1326355 pod_ready.go:92] pod "kube-proxy-mskz2" in "kube-system" namespace has status "Ready":"True"
	I1128 04:37:39.285570 1326355 pod_ready.go:81] duration metric: took 399.30342ms waiting for pod "kube-proxy-mskz2" in "kube-system" namespace to be "Ready" ...
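The "Waited … due to client-side throttling" lines above are produced by client-go's own token-bucket rate limiter, not by server-side priority and fairness (the message says as much). A minimal sketch of where those limits live, assuming a standard kubeconfig path; the raised QPS/Burst values are illustrative, not minikube's settings:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // client-go defaults to QPS=5, Burst=10; requests beyond the burst
        // are delayed on the client, producing the log lines above.
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }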
	I1128 04:37:39.285584 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:39.481904 1326355 request.go:629] Waited for 196.254484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-448128
	I1128 04:37:39.481997 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-448128
	I1128 04:37:39.482003 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:39.482012 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:39.482023 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:39.485218 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:39.485257 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:39.485266 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:39.485273 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:39.485279 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:39 GMT
	I1128 04:37:39.485286 1326355 round_trippers.go:580]     Audit-Id: e631575c-430b-4c30-bcf2-e4802272b4e5
	I1128 04:37:39.485292 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:39.485298 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:39.485482 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-448128","namespace":"kube-system","uid":"4857acb0-f079-4948-b7d9-68e443c97acb","resourceVersion":"289","creationTimestamp":"2023-11-28T04:36:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f23caa0ce4cee36da4381e5cce72405","kubernetes.io/config.mirror":"8f23caa0ce4cee36da4381e5cce72405","kubernetes.io/config.seen":"2023-11-28T04:36:44.411368392Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1128 04:37:39.682257 1326355 request.go:629] Waited for 196.333926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:39.682356 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:37:39.682365 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:39.682374 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:39.682381 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:39.684953 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:39.685012 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:39.685027 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:39 GMT
	I1128 04:37:39.685035 1326355 round_trippers.go:580]     Audit-Id: 9298327c-9cf8-4441-b497-fcc8ff676976
	I1128 04:37:39.685042 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:39.685049 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:39.685055 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:39.685061 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:39.685455 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:37:39.685855 1326355 pod_ready.go:92] pod "kube-scheduler-multinode-448128" in "kube-system" namespace has status "Ready":"True"
	I1128 04:37:39.685881 1326355 pod_ready.go:81] duration metric: took 400.289942ms waiting for pod "kube-scheduler-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:37:39.685894 1326355 pod_ready.go:38] duration metric: took 2.399700975s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
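Each wait above follows the same pattern: GET the pod, check its Ready condition, re-read the node, and retry at roughly 500ms intervals until the timeout. A minimal client-go sketch of that loop; the function name, interval, and error handling are assumptions, not minikube's exact code:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }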
	I1128 04:37:39.685916 1326355 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:37:39.685980 1326355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:37:39.698100 1326355 command_runner.go:130] > 1275
	I1128 04:37:39.699418 1326355 api_server.go:72] duration metric: took 33.563760972s to wait for apiserver process to appear ...
	I1128 04:37:39.699442 1326355 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:37:39.699460 1326355 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1128 04:37:39.708349 1326355 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
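With the kube-apiserver process confirmed via pgrep, the health probe is a plain GET of /healthz, which a healthy server answers with 200 and the body "ok". A hedged sketch using the discovery REST client, assuming an already-built clientset:

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("unexpected healthz body: %q", body)
        }
        return nil
    }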
	I1128 04:37:39.708422 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1128 04:37:39.708432 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:39.708442 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:39.708454 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:39.709727 1326355 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 04:37:39.709749 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:39.709765 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:39 GMT
	I1128 04:37:39.709772 1326355 round_trippers.go:580]     Audit-Id: 7ad471af-1da2-4a4b-b5a2-e77d83c4d9e2
	I1128 04:37:39.709780 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:39.709786 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:39.709796 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:39.709803 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:39.709811 1326355 round_trippers.go:580]     Content-Length: 264
	I1128 04:37:39.709829 1326355 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1128 04:37:39.709936 1326355 api_server.go:141] control plane version: v1.28.4
	I1128 04:37:39.709959 1326355 api_server.go:131] duration metric: took 10.510585ms to wait for apiserver health ...
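The /version body above is apimachinery's version.Info struct, and the discovery client decodes it directly. A short sketch:

    import "k8s.io/client-go/kubernetes"

    func controlPlaneVersion(cs kubernetes.Interface) (string, error) {
        info, err := cs.Discovery().ServerVersion() // GET /version, decoded into version.Info
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil // "v1.28.4" in the run above
    }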
	I1128 04:37:39.709967 1326355 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:37:39.882379 1326355 request.go:629] Waited for 172.324834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1128 04:37:39.882456 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1128 04:37:39.882468 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:39.882477 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:39.882489 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:39.885961 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:39.886012 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:39.886041 1326355 round_trippers.go:580]     Audit-Id: a3e08e7c-42a8-4285-aefa-2434e5e510b5
	I1128 04:37:39.886062 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:39.886081 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:39.886091 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:39.886109 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:39.886119 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:39 GMT
	I1128 04:37:39.886709 1326355 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"404","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1128 04:37:39.889095 1326355 system_pods.go:59] 8 kube-system pods found
	I1128 04:37:39.889128 1326355 system_pods.go:61] "coredns-5dd5756b68-h99h4" [770a2e4e-e096-47e0-81a9-0623bbaa4825] Running
	I1128 04:37:39.889135 1326355 system_pods.go:61] "etcd-multinode-448128" [121c97bc-fd53-4694-a919-1df709813895] Running
	I1128 04:37:39.889140 1326355 system_pods.go:61] "kindnet-9lv68" [8b64f475-0c18-4eeb-9cf5-99cfc90e09c6] Running
	I1128 04:37:39.889145 1326355 system_pods.go:61] "kube-apiserver-multinode-448128" [ed60cc18-21a7-4a58-b1bf-929498ac7681] Running
	I1128 04:37:39.889152 1326355 system_pods.go:61] "kube-controller-manager-multinode-448128" [b57f8849-158b-426b-ada5-bb5ea7e23ec8] Running
	I1128 04:37:39.889186 1326355 system_pods.go:61] "kube-proxy-mskz2" [36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465] Running
	I1128 04:37:39.889198 1326355 system_pods.go:61] "kube-scheduler-multinode-448128" [4857acb0-f079-4948-b7d9-68e443c97acb] Running
	I1128 04:37:39.889204 1326355 system_pods.go:61] "storage-provisioner" [d0447b95-909c-4274-82c0-d916436e0f3e] Running
	I1128 04:37:39.889210 1326355 system_pods.go:74] duration metric: took 179.237684ms to wait for pod list to return data ...
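The pod inventory is a single List call against kube-system. A sketch that reproduces the "8 kube-system pods found" summary; the output format is illustrative:

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func listKubeSystemPods(ctx context.Context, cs kubernetes.Interface) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }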
	I1128 04:37:39.889222 1326355 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:37:40.082659 1326355 request.go:629] Waited for 193.36055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1128 04:37:40.082744 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1128 04:37:40.082772 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:40.082787 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:40.082795 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:40.085720 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:40.085762 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:40.085773 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:40 GMT
	I1128 04:37:40.085780 1326355 round_trippers.go:580]     Audit-Id: 58f93c42-97ac-4f48-8517-27be36e19bd4
	I1128 04:37:40.085787 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:40.085793 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:40.085805 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:40.085812 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:40.085823 1326355 round_trippers.go:580]     Content-Length: 261
	I1128 04:37:40.085857 1326355 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"313c9ba9-3a5b-41f5-9f5e-43158a021597","resourceVersion":"309","creationTimestamp":"2023-11-28T04:37:05Z"}}]}
	I1128 04:37:40.086111 1326355 default_sa.go:45] found service account: "default"
	I1128 04:37:40.086133 1326355 default_sa.go:55] duration metric: took 196.903769ms for default service account to be created ...
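The default service-account check is likewise one List of the default namespace. Sketch (a direct Get of "default" would work equally well):

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }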
	I1128 04:37:40.086152 1326355 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:37:40.282568 1326355 request.go:629] Waited for 196.331957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1128 04:37:40.282644 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1128 04:37:40.282655 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:40.282665 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:40.282676 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:40.286173 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:40.286237 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:40.286260 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:40.286274 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:40.286280 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:40 GMT
	I1128 04:37:40.286287 1326355 round_trippers.go:580]     Audit-Id: c92b3610-2de9-46d4-baaa-8ab4aaedd3c3
	I1128 04:37:40.286294 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:40.286300 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:40.287202 1326355 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"404","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1128 04:37:40.289625 1326355 system_pods.go:86] 8 kube-system pods found
	I1128 04:37:40.289656 1326355 system_pods.go:89] "coredns-5dd5756b68-h99h4" [770a2e4e-e096-47e0-81a9-0623bbaa4825] Running
	I1128 04:37:40.289664 1326355 system_pods.go:89] "etcd-multinode-448128" [121c97bc-fd53-4694-a919-1df709813895] Running
	I1128 04:37:40.289669 1326355 system_pods.go:89] "kindnet-9lv68" [8b64f475-0c18-4eeb-9cf5-99cfc90e09c6] Running
	I1128 04:37:40.289674 1326355 system_pods.go:89] "kube-apiserver-multinode-448128" [ed60cc18-21a7-4a58-b1bf-929498ac7681] Running
	I1128 04:37:40.289680 1326355 system_pods.go:89] "kube-controller-manager-multinode-448128" [b57f8849-158b-426b-ada5-bb5ea7e23ec8] Running
	I1128 04:37:40.289685 1326355 system_pods.go:89] "kube-proxy-mskz2" [36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465] Running
	I1128 04:37:40.289690 1326355 system_pods.go:89] "kube-scheduler-multinode-448128" [4857acb0-f079-4948-b7d9-68e443c97acb] Running
	I1128 04:37:40.289694 1326355 system_pods.go:89] "storage-provisioner" [d0447b95-909c-4274-82c0-d916436e0f3e] Running
	I1128 04:37:40.289703 1326355 system_pods.go:126] duration metric: took 203.545672ms to wait for k8s-apps to be running ...
	I1128 04:37:40.289716 1326355 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:37:40.289778 1326355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:37:40.303922 1326355 system_svc.go:56] duration metric: took 14.195284ms WaitForService to wait for kubelet.
	I1128 04:37:40.303949 1326355 kubeadm.go:581] duration metric: took 34.168296676s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
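The kubelet probe shells out to systemctl, over SSH inside the node in minikube's case, locally in this sketch; exit status 0 from is-active --quiet means the unit is running:

    import (
        "context"
        "os/exec"
    )

    func kubeletActive(ctx context.Context) bool {
        // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
        return exec.CommandContext(ctx, "sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }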
	I1128 04:37:40.303987 1326355 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:37:40.482378 1326355 request.go:629] Waited for 178.312077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1128 04:37:40.482435 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1128 04:37:40.482441 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:40.482449 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:40.482456 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:40.485432 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:40.485457 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:40.485467 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:40.485473 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:40.485479 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:40.485487 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:40.485493 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:40 GMT
	I1128 04:37:40.485500 1326355 round_trippers.go:580]     Audit-Id: 6ebe4741-e2e7-4e9f-b34e-3b1f10fe9622
	I1128 04:37:40.485794 1326355 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1128 04:37:40.486249 1326355 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:37:40.486278 1326355 node_conditions.go:123] node cpu capacity is 2
	I1128 04:37:40.486291 1326355 node_conditions.go:105] duration metric: took 182.291175ms to run NodePressure ...
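The NodePressure step reads capacity off the NodeList above and would flag any pressure condition. A sketch of both reads; the failure behavior is an assumption:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                    }
                }
            }
        }
        return nil
    }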
	I1128 04:37:40.486304 1326355 start.go:228] waiting for startup goroutines ...
	I1128 04:37:40.486315 1326355 start.go:233] waiting for cluster config update ...
	I1128 04:37:40.486327 1326355 start.go:242] writing updated cluster config ...
	I1128 04:37:40.488810 1326355 out.go:177] 
	I1128 04:37:40.490936 1326355 config.go:182] Loaded profile config "multinode-448128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:37:40.491079 1326355 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/config.json ...
	I1128 04:37:40.493617 1326355 out.go:177] * Starting worker node multinode-448128-m02 in cluster multinode-448128
	I1128 04:37:40.495346 1326355 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:37:40.497073 1326355 out.go:177] * Pulling base image ...
	I1128 04:37:40.499869 1326355 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:37:40.499898 1326355 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:37:40.499974 1326355 cache.go:56] Caching tarball of preloaded images
	I1128 04:37:40.500062 1326355 preload.go:174] Found /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1128 04:37:40.500072 1326355 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:37:40.500162 1326355 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/config.json ...
	I1128 04:37:40.518348 1326355 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1128 04:37:40.518374 1326355 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1128 04:37:40.518396 1326355 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:37:40.518429 1326355 start.go:365] acquiring machines lock for multinode-448128-m02: {Name:mkf53c73f885cd0f9c44581cc107e6631dec0b8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:37:40.518566 1326355 start.go:369] acquired machines lock for "multinode-448128-m02" in 119.334µs
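The machines-lock struct printed above (Name, Clock, Delay:500ms, Timeout:10m0s, Cancel) matches the shape of a juju mutex Spec; the sketch below is inferred from that log line, not confirmed against minikube's source:

    import (
        "time"

        "github.com/juju/clock"
        "github.com/juju/mutex/v2"
    )

    func acquireMachinesLock(name string) (mutex.Releaser, error) {
        spec := mutex.Spec{
            Name:    name, // a hashed name in the log: mkf53c73...
            Clock:   clock.WallClock,
            Delay:   500 * time.Millisecond, // retry interval, per the log
            Timeout: 10 * time.Minute,       // give up after 10m, per the log
        }
        return mutex.Acquire(spec) // caller must Release() when done
    }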
	I1128 04:37:40.518595 1326355 start.go:93] Provisioning new machine with config: &{Name:multinode-448128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-448128 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 04:37:40.518684 1326355 start.go:125] createHost starting for "m02" (driver="docker")
	I1128 04:37:40.522197 1326355 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1128 04:37:40.522322 1326355 start.go:159] libmachine.API.Create for "multinode-448128" (driver="docker")
	I1128 04:37:40.522351 1326355 client.go:168] LocalClient.Create starting
	I1128 04:37:40.522423 1326355 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem
	I1128 04:37:40.522463 1326355 main.go:141] libmachine: Decoding PEM data...
	I1128 04:37:40.522480 1326355 main.go:141] libmachine: Parsing certificate...
	I1128 04:37:40.522536 1326355 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem
	I1128 04:37:40.522574 1326355 main.go:141] libmachine: Decoding PEM data...
	I1128 04:37:40.522590 1326355 main.go:141] libmachine: Parsing certificate...
	I1128 04:37:40.522860 1326355 cli_runner.go:164] Run: docker network inspect multinode-448128 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:37:40.544622 1326355 network_create.go:77] Found existing network {name:multinode-448128 subnet:0x40027b9e90 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1128 04:37:40.544689 1326355 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-448128-m02" container
	I1128 04:37:40.544773 1326355 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1128 04:37:40.563713 1326355 cli_runner.go:164] Run: docker volume create multinode-448128-m02 --label name.minikube.sigs.k8s.io=multinode-448128-m02 --label created_by.minikube.sigs.k8s.io=true
	I1128 04:37:40.584015 1326355 oci.go:103] Successfully created a docker volume multinode-448128-m02
	I1128 04:37:40.584107 1326355 cli_runner.go:164] Run: docker run --rm --name multinode-448128-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-448128-m02 --entrypoint /usr/bin/test -v multinode-448128-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1128 04:37:41.172989 1326355 oci.go:107] Successfully prepared a docker volume multinode-448128-m02
	I1128 04:37:41.173028 1326355 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:37:41.173049 1326355 kic.go:194] Starting extracting preloaded images to volume ...
	I1128 04:37:41.173132 1326355 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-448128-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1128 04:37:45.623967 1326355 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-448128-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (4.450786627s)
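This is the throwaway-container trick for filling a named volume: the lz4-compressed preload tarball is bind-mounted read-only and untarred into the volume by a temporary container. Reduced to its essentials, with volume and file names as illustrative placeholders:

	docker volume create node-vol
	docker run --rm \
	  -v ./preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v node-vol:/extractDir \
	  --entrypoint /usr/bin/tar gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634 \
	  -I lz4 -xf /preloaded.tar -C /extractDir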
	I1128 04:37:45.623999 1326355 kic.go:203] duration metric: took 4.450947 seconds to extract preloaded images to volume
	W1128 04:37:45.624134 1326355 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1128 04:37:45.624254 1326355 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1128 04:37:45.697145 1326355 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-448128-m02 --name multinode-448128-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-448128-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-448128-m02 --network multinode-448128 --ip 192.168.58.3 --volume multinode-448128-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 04:37:46.093223 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128-m02 --format={{.State.Running}}
	I1128 04:37:46.125921 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128-m02 --format={{.State.Status}}
	I1128 04:37:46.160628 1326355 cli_runner.go:164] Run: docker exec multinode-448128-m02 stat /var/lib/dpkg/alternatives/iptables
	I1128 04:37:46.239326 1326355 oci.go:144] the created container "multinode-448128-m02" has a running status.
	I1128 04:37:46.239353 1326355 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa...
	I1128 04:37:46.790541 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1128 04:37:46.790609 1326355 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1128 04:37:46.839298 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128-m02 --format={{.State.Status}}
	I1128 04:37:46.881009 1326355 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1128 04:37:46.881030 1326355 kic_runner.go:114] Args: [docker exec --privileged multinode-448128-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1128 04:37:46.977754 1326355 cli_runner.go:164] Run: docker container inspect multinode-448128-m02 --format={{.State.Status}}
	I1128 04:37:47.007275 1326355 machine.go:88] provisioning docker machine ...
	I1128 04:37:47.007318 1326355 ubuntu.go:169] provisioning hostname "multinode-448128-m02"
	I1128 04:37:47.007406 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:37:47.042375 1326355 main.go:141] libmachine: Using SSH client type: native
	I1128 04:37:47.042949 1326355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34404 <nil> <nil>}
	I1128 04:37:47.042966 1326355 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-448128-m02 && echo "multinode-448128-m02" | sudo tee /etc/hostname
	I1128 04:37:47.246923 1326355 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-448128-m02
	
	I1128 04:37:47.247023 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:37:47.273190 1326355 main.go:141] libmachine: Using SSH client type: native
	I1128 04:37:47.273631 1326355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34404 <nil> <nil>}
	I1128 04:37:47.273657 1326355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-448128-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-448128-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-448128-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:37:47.422411 1326355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
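The script above either rewrites an existing 127.0.1.1 entry in place or appends a fresh one, so the node can resolve its own hostname locally. A quick check of the result, assuming the command succeeded:

	grep '^127.0.1.1' /etc/hosts
	# expect: 127.0.1.1 multinode-448128-m02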
	I1128 04:37:47.422445 1326355 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:37:47.422463 1326355 ubuntu.go:177] setting up certificates
	I1128 04:37:47.422472 1326355 provision.go:83] configureAuth start
	I1128 04:37:47.422544 1326355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-448128-m02
	I1128 04:37:47.450079 1326355 provision.go:138] copyHostCerts
	I1128 04:37:47.450118 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:37:47.450149 1326355 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:37:47.450156 1326355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:37:47.450231 1326355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:37:47.450300 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:37:47.450317 1326355 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:37:47.450321 1326355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:37:47.450345 1326355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:37:47.450380 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:37:47.450397 1326355 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:37:47.450401 1326355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:37:47.450432 1326355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:37:47.450473 1326355 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.multinode-448128-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-448128-m02]
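The server certificate is minted from the shared minikube CA with the node's IP and names baked in as subject alternative names. A rough openssl equivalent, with paths and validity period as illustrative assumptions:

	openssl req -new -newkey rsa:2048 -nodes \
	  -subj "/O=jenkins.multinode-448128-m02" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-448128-m02')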
	I1128 04:37:48.442052 1326355 provision.go:172] copyRemoteCerts
	I1128 04:37:48.442124 1326355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:37:48.442169 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:37:48.465577 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34404 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa Username:docker}
	I1128 04:37:48.567693 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 04:37:48.567814 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1128 04:37:48.599116 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 04:37:48.599177 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:37:48.630127 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 04:37:48.630191 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:37:48.660936 1326355 provision.go:86] duration metric: configureAuth took 1.238450484s
	I1128 04:37:48.660965 1326355 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:37:48.661173 1326355 config.go:182] Loaded profile config "multinode-448128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:37:48.661291 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:37:48.680217 1326355 main.go:141] libmachine: Using SSH client type: native
	I1128 04:37:48.680697 1326355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34404 <nil> <nil>}
	I1128 04:37:48.680718 1326355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:37:48.940603 1326355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
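The provisioning step leaves a one-line drop-in at /etc/sysconfig/crio.minikube, presumably read via an EnvironmentFile= line in the crio systemd unit; its contents are exactly the echoed line:

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '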
	
	I1128 04:37:48.940624 1326355 machine.go:91] provisioned docker machine in 1.93331848s
	I1128 04:37:48.940634 1326355 client.go:171] LocalClient.Create took 8.418275012s
	I1128 04:37:48.940674 1326355 start.go:167] duration metric: libmachine.API.Create for "multinode-448128" took 8.418333121s
	I1128 04:37:48.940683 1326355 start.go:300] post-start starting for "multinode-448128-m02" (driver="docker")
	I1128 04:37:48.940692 1326355 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:37:48.940799 1326355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:37:48.940853 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:37:48.960388 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34404 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa Username:docker}
	I1128 04:37:49.060863 1326355 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:37:49.065020 1326355 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1128 04:37:49.065046 1326355 command_runner.go:130] > NAME="Ubuntu"
	I1128 04:37:49.065053 1326355 command_runner.go:130] > VERSION_ID="22.04"
	I1128 04:37:49.065060 1326355 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1128 04:37:49.065066 1326355 command_runner.go:130] > VERSION_CODENAME=jammy
	I1128 04:37:49.065070 1326355 command_runner.go:130] > ID=ubuntu
	I1128 04:37:49.065075 1326355 command_runner.go:130] > ID_LIKE=debian
	I1128 04:37:49.065081 1326355 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1128 04:37:49.065087 1326355 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1128 04:37:49.065100 1326355 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1128 04:37:49.065112 1326355 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1128 04:37:49.065121 1326355 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1128 04:37:49.065180 1326355 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:37:49.065206 1326355 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:37:49.065223 1326355 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:37:49.065231 1326355 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1128 04:37:49.065244 1326355 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:37:49.065300 1326355 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:37:49.065383 1326355 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:37:49.065393 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> /etc/ssl/certs/12614152.pem
	I1128 04:37:49.065509 1326355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:37:49.076221 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:37:49.108256 1326355 start.go:303] post-start completed in 167.554692ms
	I1128 04:37:49.108709 1326355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-448128-m02
	I1128 04:37:49.130066 1326355 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/config.json ...
	I1128 04:37:49.130378 1326355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:37:49.130435 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:37:49.149503 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34404 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa Username:docker}
	I1128 04:37:49.243188 1326355 command_runner.go:130] > 19%
	I1128 04:37:49.243278 1326355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:37:49.248922 1326355 command_runner.go:130] > 159G
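The two df probes read different columns: the first takes column 5 of the human-readable output (Use%), the second takes column 4 of the gigabyte-block output (Avail). Side by side:

	df -h /var  | awk 'NR==2{print $5}'   # Use%  -> 19%
	df -BG /var | awk 'NR==2{print $4}'   # Avail -> 159G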
	I1128 04:37:49.249503 1326355 start.go:128] duration metric: createHost completed in 8.730804973s
	I1128 04:37:49.249521 1326355 start.go:83] releasing machines lock for "multinode-448128-m02", held for 8.730945092s
	I1128 04:37:49.249599 1326355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-448128-m02
	I1128 04:37:49.270783 1326355 out.go:177] * Found network options:
	I1128 04:37:49.272606 1326355 out.go:177]   - NO_PROXY=192.168.58.2
	W1128 04:37:49.274706 1326355 proxy.go:119] fail to check proxy env: Error ip not in block
	W1128 04:37:49.274760 1326355 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 04:37:49.274841 1326355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:37:49.274887 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:37:49.275161 1326355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:37:49.275213 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:37:49.296003 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34404 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa Username:docker}
	I1128 04:37:49.306249 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34404 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa Username:docker}
	I1128 04:37:49.558154 1326355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:37:49.558305 1326355 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 04:37:49.563719 1326355 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1128 04:37:49.563787 1326355 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1128 04:37:49.563808 1326355 command_runner.go:130] > Device: b3h/179d	Inode: 5449282     Links: 1
	I1128 04:37:49.563823 1326355 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 04:37:49.563831 1326355 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1128 04:37:49.563838 1326355 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1128 04:37:49.563856 1326355 command_runner.go:130] > Change: 2023-11-28 04:13:24.237847244 +0000
	I1128 04:37:49.563868 1326355 command_runner.go:130] >  Birth: 2023-11-28 04:13:24.237847244 +0000
	I1128 04:37:49.564298 1326355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:37:49.588101 1326355 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:37:49.588225 1326355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:37:49.624732 1326355 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1128 04:37:49.624821 1326355 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1128 04:37:49.624847 1326355 start.go:472] detecting cgroup driver to use...
	I1128 04:37:49.624895 1326355 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:37:49.625016 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:37:49.644114 1326355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:37:49.658179 1326355 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:37:49.658295 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:37:49.674160 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:37:49.692296 1326355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:37:49.789457 1326355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:37:49.899198 1326355 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1128 04:37:49.899236 1326355 docker.go:219] disabling docker service ...
	I1128 04:37:49.899293 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:37:49.922568 1326355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:37:49.937761 1326355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:37:50.042114 1326355 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1128 04:37:50.042252 1326355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:37:50.156075 1326355 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1128 04:37:50.156201 1326355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:37:50.170989 1326355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:37:50.192612 1326355 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1128 04:37:50.194432 1326355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:37:50.194548 1326355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:37:50.208835 1326355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:37:50.208932 1326355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:37:50.221331 1326355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:37:50.235256 1326355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
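After the three sed edits above, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf should read as follows (a reconstruction from the commands, not a dump of the actual file):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"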
	I1128 04:37:50.248284 1326355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:37:50.260790 1326355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:37:50.271926 1326355 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1128 04:37:50.272050 1326355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
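Bridged traffic must pass through iptables and IPv4 forwarding must be enabled for pod networking to work; both knobs can be verified directly:

	sysctl net.bridge.bridge-nf-call-iptables   # expect: ... = 1
	cat /proc/sys/net/ipv4/ip_forward           # expect: 1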
	I1128 04:37:50.282563 1326355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:37:50.372497 1326355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:37:50.507057 1326355 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:37:50.507186 1326355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:37:50.512089 1326355 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 04:37:50.512152 1326355 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 04:37:50.512176 1326355 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1128 04:37:50.512199 1326355 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 04:37:50.512223 1326355 command_runner.go:130] > Access: 2023-11-28 04:37:50.490218685 +0000
	I1128 04:37:50.512256 1326355 command_runner.go:130] > Modify: 2023-11-28 04:37:50.490218685 +0000
	I1128 04:37:50.512278 1326355 command_runner.go:130] > Change: 2023-11-28 04:37:50.490218685 +0000
	I1128 04:37:50.512295 1326355 command_runner.go:130] >  Birth: -
	I1128 04:37:50.512602 1326355 start.go:540] Will wait 60s for crictl version
	I1128 04:37:50.512712 1326355 ssh_runner.go:195] Run: which crictl
	I1128 04:37:50.517652 1326355 command_runner.go:130] > /usr/bin/crictl
	I1128 04:37:50.518013 1326355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:37:50.558101 1326355 command_runner.go:130] > Version:  0.1.0
	I1128 04:37:50.558173 1326355 command_runner.go:130] > RuntimeName:  cri-o
	I1128 04:37:50.558194 1326355 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1128 04:37:50.558217 1326355 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 04:37:50.560956 1326355 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1128 04:37:50.561088 1326355 ssh_runner.go:195] Run: crio --version
	I1128 04:37:50.605621 1326355 command_runner.go:130] > crio version 1.24.6
	I1128 04:37:50.605702 1326355 command_runner.go:130] > Version:          1.24.6
	I1128 04:37:50.605726 1326355 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1128 04:37:50.605748 1326355 command_runner.go:130] > GitTreeState:     clean
	I1128 04:37:50.605774 1326355 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1128 04:37:50.605804 1326355 command_runner.go:130] > GoVersion:        go1.18.2
	I1128 04:37:50.605824 1326355 command_runner.go:130] > Compiler:         gc
	I1128 04:37:50.605843 1326355 command_runner.go:130] > Platform:         linux/arm64
	I1128 04:37:50.605865 1326355 command_runner.go:130] > Linkmode:         dynamic
	I1128 04:37:50.605897 1326355 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 04:37:50.605918 1326355 command_runner.go:130] > SeccompEnabled:   true
	I1128 04:37:50.605936 1326355 command_runner.go:130] > AppArmorEnabled:  false
	I1128 04:37:50.608524 1326355 ssh_runner.go:195] Run: crio --version
	I1128 04:37:50.650830 1326355 command_runner.go:130] > crio version 1.24.6
	I1128 04:37:50.650856 1326355 command_runner.go:130] > Version:          1.24.6
	I1128 04:37:50.650866 1326355 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1128 04:37:50.650880 1326355 command_runner.go:130] > GitTreeState:     clean
	I1128 04:37:50.650888 1326355 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1128 04:37:50.650909 1326355 command_runner.go:130] > GoVersion:        go1.18.2
	I1128 04:37:50.650920 1326355 command_runner.go:130] > Compiler:         gc
	I1128 04:37:50.650931 1326355 command_runner.go:130] > Platform:         linux/arm64
	I1128 04:37:50.650937 1326355 command_runner.go:130] > Linkmode:         dynamic
	I1128 04:37:50.650960 1326355 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 04:37:50.650972 1326355 command_runner.go:130] > SeccompEnabled:   true
	I1128 04:37:50.650978 1326355 command_runner.go:130] > AppArmorEnabled:  false
	I1128 04:37:50.656581 1326355 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1128 04:37:50.658245 1326355 out.go:177]   - env NO_PROXY=192.168.58.2
	I1128 04:37:50.659666 1326355 cli_runner.go:164] Run: docker network inspect multinode-448128 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:37:50.678897 1326355 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1128 04:37:50.683811 1326355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
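The one-liner above rewrites /etc/hosts in two steps: it filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back. Verifying the result, assuming it succeeded:

	grep host.minikube.internal /etc/hosts
	# expect: 192.168.58.1	host.minikube.internal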
	I1128 04:37:50.698387 1326355 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128 for IP: 192.168.58.3
	I1128 04:37:50.698425 1326355 certs.go:190] acquiring lock for shared ca certs: {Name:mka7cf71bac87c390cad9bb03b67c849db7103ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:37:50.698567 1326355 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key
	I1128 04:37:50.698612 1326355 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key
	I1128 04:37:50.698627 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 04:37:50.698646 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 04:37:50.698661 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 04:37:50.698673 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 04:37:50.698733 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem (1338 bytes)
	W1128 04:37:50.698775 1326355 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415_empty.pem, impossibly tiny 0 bytes
	I1128 04:37:50.698789 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:37:50.698818 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem (1082 bytes)
	I1128 04:37:50.698848 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:37:50.698875 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem (1679 bytes)
	I1128 04:37:50.698926 1326355 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:37:50.698981 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> /usr/share/ca-certificates/12614152.pem
	I1128 04:37:50.698998 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:37:50.699009 1326355 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem -> /usr/share/ca-certificates/1261415.pem
	I1128 04:37:50.699444 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:37:50.729542 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:37:50.758882 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:37:50.791855 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1128 04:37:50.821608 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /usr/share/ca-certificates/12614152.pem (1708 bytes)
	I1128 04:37:50.850668 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:37:50.880464 1326355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem --> /usr/share/ca-certificates/1261415.pem (1338 bytes)
	I1128 04:37:50.912029 1326355 ssh_runner.go:195] Run: openssl version
	I1128 04:37:50.918933 1326355 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1128 04:37:50.919070 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1261415.pem && ln -fs /usr/share/ca-certificates/1261415.pem /etc/ssl/certs/1261415.pem"
	I1128 04:37:50.930770 1326355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261415.pem
	I1128 04:37:50.935450 1326355 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 28 04:21 /usr/share/ca-certificates/1261415.pem
	I1128 04:37:50.935560 1326355 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 04:21 /usr/share/ca-certificates/1261415.pem
	I1128 04:37:50.935623 1326355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261415.pem
	I1128 04:37:50.943943 1326355 command_runner.go:130] > 51391683
	I1128 04:37:50.944337 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1261415.pem /etc/ssl/certs/51391683.0"
	I1128 04:37:50.955970 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12614152.pem && ln -fs /usr/share/ca-certificates/12614152.pem /etc/ssl/certs/12614152.pem"
	I1128 04:37:50.967774 1326355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12614152.pem
	I1128 04:37:50.973440 1326355 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 28 04:21 /usr/share/ca-certificates/12614152.pem
	I1128 04:37:50.973518 1326355 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 04:21 /usr/share/ca-certificates/12614152.pem
	I1128 04:37:50.973600 1326355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12614152.pem
	I1128 04:37:50.981920 1326355 command_runner.go:130] > 3ec20f2e
	I1128 04:37:50.982289 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12614152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:37:50.994705 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:37:51.016621 1326355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:37:51.022052 1326355 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 28 04:13 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:37:51.022084 1326355 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 04:13 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:37:51.022147 1326355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:37:51.031059 1326355 command_runner.go:130] > b5213941
	I1128 04:37:51.031602 1326355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
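Each link name is the certificate's OpenSSL subject hash with a ".0" suffix, which is the layout OpenSSL uses to look up CAs in a hashed directory. The same step by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# here h = b5213941, matching the command above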
	I1128 04:37:51.051258 1326355 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:37:51.061745 1326355 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 04:37:51.061838 1326355 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 04:37:51.061994 1326355 ssh_runner.go:195] Run: crio config
	I1128 04:37:51.118016 1326355 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 04:37:51.118117 1326355 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 04:37:51.118198 1326355 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 04:37:51.118231 1326355 command_runner.go:130] > #
	I1128 04:37:51.118268 1326355 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 04:37:51.118293 1326355 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 04:37:51.118337 1326355 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 04:37:51.118367 1326355 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 04:37:51.118389 1326355 command_runner.go:130] > # reload'.
	I1128 04:37:51.118426 1326355 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 04:37:51.118466 1326355 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 04:37:51.118521 1326355 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 04:37:51.118548 1326355 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 04:37:51.118591 1326355 command_runner.go:130] > [crio]
	I1128 04:37:51.118630 1326355 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 04:37:51.118673 1326355 command_runner.go:130] > # containers images, in this directory.
	I1128 04:37:51.118716 1326355 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1128 04:37:51.118836 1326355 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 04:37:51.118872 1326355 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1128 04:37:51.118895 1326355 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 04:37:51.118918 1326355 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 04:37:51.118951 1326355 command_runner.go:130] > # storage_driver = "vfs"
	I1128 04:37:51.118995 1326355 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 04:37:51.119027 1326355 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 04:37:51.119066 1326355 command_runner.go:130] > # storage_option = [
	I1128 04:37:51.119089 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.119110 1326355 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 04:37:51.119133 1326355 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 04:37:51.119660 1326355 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 04:37:51.119730 1326355 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 04:37:51.119759 1326355 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 04:37:51.119804 1326355 command_runner.go:130] > # always happen on a node reboot
	I1128 04:37:51.119839 1326355 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 04:37:51.119865 1326355 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 04:37:51.119902 1326355 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 04:37:51.119942 1326355 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 04:37:51.119977 1326355 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 04:37:51.120015 1326355 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 04:37:51.120052 1326355 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 04:37:51.120087 1326355 command_runner.go:130] > # internal_wipe = true
	I1128 04:37:51.120122 1326355 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 04:37:51.120161 1326355 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 04:37:51.120192 1326355 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 04:37:51.120292 1326355 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 04:37:51.120338 1326355 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 04:37:51.120381 1326355 command_runner.go:130] > [crio.api]
	I1128 04:37:51.120423 1326355 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 04:37:51.120456 1326355 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 04:37:51.120480 1326355 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 04:37:51.120510 1326355 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 04:37:51.120581 1326355 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 04:37:51.120615 1326355 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 04:37:51.120636 1326355 command_runner.go:130] > # stream_port = "0"
	I1128 04:37:51.120684 1326355 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 04:37:51.120729 1326355 command_runner.go:130] > # stream_enable_tls = false
	I1128 04:37:51.120759 1326355 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 04:37:51.120798 1326355 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 04:37:51.120831 1326355 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 04:37:51.120861 1326355 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 04:37:51.120879 1326355 command_runner.go:130] > # minutes.
	I1128 04:37:51.120918 1326355 command_runner.go:130] > # stream_tls_cert = ""
	I1128 04:37:51.120950 1326355 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 04:37:51.120997 1326355 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 04:37:51.121026 1326355 command_runner.go:130] > # stream_tls_key = ""
	I1128 04:37:51.121071 1326355 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 04:37:51.121108 1326355 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 04:37:51.121156 1326355 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 04:37:51.121190 1326355 command_runner.go:130] > # stream_tls_ca = ""
	I1128 04:37:51.121232 1326355 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 04:37:51.121265 1326355 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1128 04:37:51.121292 1326355 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 04:37:51.121325 1326355 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1128 04:37:51.121391 1326355 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 04:37:51.121429 1326355 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 04:37:51.121451 1326355 command_runner.go:130] > [crio.runtime]
	I1128 04:37:51.121480 1326355 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 04:37:51.121532 1326355 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 04:37:51.121568 1326355 command_runner.go:130] > # "nofile=1024:2048"
	I1128 04:37:51.121602 1326355 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 04:37:51.121642 1326355 command_runner.go:130] > # default_ulimits = [
	I1128 04:37:51.121670 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.121700 1326355 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 04:37:51.121734 1326355 command_runner.go:130] > # no_pivot = false
	I1128 04:37:51.121760 1326355 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 04:37:51.121805 1326355 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 04:37:51.121852 1326355 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 04:37:51.121885 1326355 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 04:37:51.121906 1326355 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 04:37:51.121972 1326355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 04:37:51.122036 1326355 command_runner.go:130] > # conmon = ""
	I1128 04:37:51.122114 1326355 command_runner.go:130] > # Cgroup setting for conmon
	I1128 04:37:51.122155 1326355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 04:37:51.122191 1326355 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 04:37:51.122233 1326355 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 04:37:51.122262 1326355 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 04:37:51.122293 1326355 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 04:37:51.122313 1326355 command_runner.go:130] > # conmon_env = [
	I1128 04:37:51.122349 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.122377 1326355 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 04:37:51.122406 1326355 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 04:37:51.122429 1326355 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 04:37:51.122457 1326355 command_runner.go:130] > # default_env = [
	I1128 04:37:51.122492 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.122525 1326355 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 04:37:51.122680 1326355 command_runner.go:130] > # selinux = false
	I1128 04:37:51.122725 1326355 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 04:37:51.122772 1326355 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 04:37:51.122819 1326355 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 04:37:51.122865 1326355 command_runner.go:130] > # seccomp_profile = ""
	I1128 04:37:51.122910 1326355 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 04:37:51.123082 1326355 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 04:37:51.123404 1326355 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 04:37:51.123445 1326355 command_runner.go:130] > # which might increase security.
	I1128 04:37:51.123515 1326355 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1128 04:37:51.123564 1326355 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 04:37:51.123603 1326355 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 04:37:51.123645 1326355 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 04:37:51.123681 1326355 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 04:37:51.123707 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:37:51.123728 1326355 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 04:37:51.123767 1326355 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 04:37:51.123795 1326355 command_runner.go:130] > # the cgroup blockio controller.
	I1128 04:37:51.123840 1326355 command_runner.go:130] > # blockio_config_file = ""
	I1128 04:37:51.123882 1326355 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 04:37:51.123908 1326355 command_runner.go:130] > # irqbalance daemon.
	I1128 04:37:51.123941 1326355 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 04:37:51.124002 1326355 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 04:37:51.124048 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:37:51.124075 1326355 command_runner.go:130] > # rdt_config_file = ""
	I1128 04:37:51.124097 1326355 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 04:37:51.124138 1326355 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 04:37:51.124172 1326355 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 04:37:51.124201 1326355 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 04:37:51.124258 1326355 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 04:37:51.124301 1326355 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 04:37:51.124340 1326355 command_runner.go:130] > # will be added.
	I1128 04:37:51.124376 1326355 command_runner.go:130] > # default_capabilities = [
	I1128 04:37:51.124424 1326355 command_runner.go:130] > # 	"CHOWN",
	I1128 04:37:51.124462 1326355 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 04:37:51.124507 1326355 command_runner.go:130] > # 	"FSETID",
	I1128 04:37:51.124540 1326355 command_runner.go:130] > # 	"FOWNER",
	I1128 04:37:51.124567 1326355 command_runner.go:130] > # 	"SETGID",
	I1128 04:37:51.124649 1326355 command_runner.go:130] > # 	"SETUID",
	I1128 04:37:51.124719 1326355 command_runner.go:130] > # 	"SETPCAP",
	I1128 04:37:51.124732 1326355 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 04:37:51.124737 1326355 command_runner.go:130] > # 	"KILL",
	I1128 04:37:51.124741 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.124753 1326355 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1128 04:37:51.124762 1326355 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1128 04:37:51.124779 1326355 command_runner.go:130] > # add_inheritable_capabilities = true
	I1128 04:37:51.124789 1326355 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 04:37:51.124801 1326355 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 04:37:51.124806 1326355 command_runner.go:130] > # default_sysctls = [
	I1128 04:37:51.124814 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.124820 1326355 command_runner.go:130] > # List of devices on the host that a
	I1128 04:37:51.124828 1326355 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 04:37:51.124834 1326355 command_runner.go:130] > # allowed_devices = [
	I1128 04:37:51.124842 1326355 command_runner.go:130] > # 	"/dev/fuse",
	I1128 04:37:51.124846 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.124852 1326355 command_runner.go:130] > # List of additional devices, specified as
	I1128 04:37:51.124870 1326355 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 04:37:51.124881 1326355 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 04:37:51.124889 1326355 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 04:37:51.124894 1326355 command_runner.go:130] > # additional_devices = [
	I1128 04:37:51.124901 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.124908 1326355 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 04:37:51.124918 1326355 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 04:37:51.124930 1326355 command_runner.go:130] > # 	"/etc/cdi",
	I1128 04:37:51.124943 1326355 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 04:37:51.124948 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.124966 1326355 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 04:37:51.124980 1326355 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 04:37:51.124990 1326355 command_runner.go:130] > # Defaults to false.
	I1128 04:37:51.125000 1326355 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 04:37:51.125013 1326355 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 04:37:51.125027 1326355 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 04:37:51.125040 1326355 command_runner.go:130] > # hooks_dir = [
	I1128 04:37:51.125049 1326355 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 04:37:51.125055 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.125072 1326355 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 04:37:51.125084 1326355 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 04:37:51.125090 1326355 command_runner.go:130] > # its default mounts from the following two files:
	I1128 04:37:51.125107 1326355 command_runner.go:130] > #
	I1128 04:37:51.125119 1326355 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 04:37:51.125311 1326355 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 04:37:51.125333 1326355 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 04:37:51.125339 1326355 command_runner.go:130] > #
	I1128 04:37:51.125351 1326355 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 04:37:51.125372 1326355 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 04:37:51.125384 1326355 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 04:37:51.125396 1326355 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 04:37:51.125404 1326355 command_runner.go:130] > #
	I1128 04:37:51.125410 1326355 command_runner.go:130] > # default_mounts_file = ""
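If default_mounts_file were pointed at an override file, its contents would follow the /SRC:/DST, one-mount-per-line format described above; a sketch with purely hypothetical paths:
	# /etc/containers/mounts.conf (hypothetical contents)
	/etc/machine-id:/etc/machine-id
	/var/lib/extra-certs:/etc/ssl/extra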
	I1128 04:37:51.125421 1326355 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 04:37:51.125439 1326355 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 04:37:51.125454 1326355 command_runner.go:130] > # pids_limit = 0
	I1128 04:37:51.125464 1326355 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1128 04:37:51.125475 1326355 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 04:37:51.125489 1326355 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 04:37:51.125504 1326355 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 04:37:51.125513 1326355 command_runner.go:130] > # log_size_max = -1
	I1128 04:37:51.125629 1326355 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I1128 04:37:51.125654 1326355 command_runner.go:130] > # log_to_journald = false
	I1128 04:37:51.125667 1326355 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 04:37:51.125677 1326355 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 04:37:51.125684 1326355 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 04:37:51.125691 1326355 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 04:37:51.125706 1326355 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 04:37:51.125722 1326355 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 04:37:51.125737 1326355 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 04:37:51.125743 1326355 command_runner.go:130] > # read_only = false
	I1128 04:37:51.125762 1326355 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 04:37:51.125781 1326355 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 04:37:51.125787 1326355 command_runner.go:130] > # live configuration reload.
	I1128 04:37:51.125795 1326355 command_runner.go:130] > # log_level = "info"
	I1128 04:37:51.125803 1326355 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 04:37:51.125810 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:37:51.125819 1326355 command_runner.go:130] > # log_filter = ""
	I1128 04:37:51.125831 1326355 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 04:37:51.125839 1326355 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 04:37:51.125854 1326355 command_runner.go:130] > # separated by comma.
	I1128 04:37:51.125862 1326355 command_runner.go:130] > # uid_mappings = ""
	I1128 04:37:51.125875 1326355 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 04:37:51.125895 1326355 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 04:37:51.125903 1326355 command_runner.go:130] > # separated by comma.
	I1128 04:37:51.125923 1326355 command_runner.go:130] > # gid_mappings = ""
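Populated mappings follow the containerUID:HostUID:Size / containerGID:HostGID:Size forms above; a sketch mapping container root onto an unprivileged host range (the base 100000 and size 65536 are illustrative values, not taken from this run):
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"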
	I1128 04:37:51.125937 1326355 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 04:37:51.125947 1326355 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 04:37:51.125958 1326355 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 04:37:51.125973 1326355 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 04:37:51.125981 1326355 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 04:37:51.125992 1326355 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 04:37:51.125999 1326355 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 04:37:51.126005 1326355 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 04:37:51.126022 1326355 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 04:37:51.126032 1326355 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 04:37:51.126045 1326355 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1128 04:37:51.126056 1326355 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 04:37:51.126065 1326355 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 04:37:51.126084 1326355 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 04:37:51.126091 1326355 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 04:37:51.126102 1326355 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 04:37:51.126110 1326355 command_runner.go:130] > # drop_infra_ctr = true
	I1128 04:37:51.126125 1326355 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 04:37:51.126139 1326355 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 04:37:51.126149 1326355 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 04:37:51.126158 1326355 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 04:37:51.126166 1326355 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 04:37:51.126172 1326355 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 04:37:51.126182 1326355 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 04:37:51.126200 1326355 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 04:37:51.126209 1326355 command_runner.go:130] > # pinns_path = ""
	I1128 04:37:51.126228 1326355 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 04:37:51.126243 1326355 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 04:37:51.126261 1326355 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 04:37:51.126274 1326355 command_runner.go:130] > # default_runtime = "runc"
	I1128 04:37:51.126286 1326355 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 04:37:51.126314 1326355 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1128 04:37:51.126337 1326355 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 04:37:51.126346 1326355 command_runner.go:130] > # creation as a file is not desired either.
	I1128 04:37:51.126363 1326355 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 04:37:51.126386 1326355 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 04:37:51.126392 1326355 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 04:37:51.126397 1326355 command_runner.go:130] > # ]
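Using the /etc/hostname case called out above, a populated version of this list would read:
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]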
	I1128 04:37:51.126405 1326355 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 04:37:51.126413 1326355 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 04:37:51.126425 1326355 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 04:37:51.126435 1326355 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 04:37:51.126440 1326355 command_runner.go:130] > #
	I1128 04:37:51.126446 1326355 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 04:37:51.126456 1326355 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 04:37:51.126470 1326355 command_runner.go:130] > #  runtime_type = "oci"
	I1128 04:37:51.126477 1326355 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 04:37:51.126489 1326355 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 04:37:51.126496 1326355 command_runner.go:130] > #  allowed_annotations = []
	I1128 04:37:51.126501 1326355 command_runner.go:130] > # Where:
	I1128 04:37:51.126517 1326355 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 04:37:51.126526 1326355 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 04:37:51.126541 1326355 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 04:37:51.126553 1326355 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 04:37:51.126562 1326355 command_runner.go:130] > #   in $PATH.
	I1128 04:37:51.126574 1326355 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 04:37:51.126599 1326355 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 04:37:51.126611 1326355 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 04:37:51.126621 1326355 command_runner.go:130] > #   state.
	I1128 04:37:51.126639 1326355 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 04:37:51.126653 1326355 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1128 04:37:51.126674 1326355 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 04:37:51.126686 1326355 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 04:37:51.126706 1326355 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 04:37:51.126718 1326355 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 04:37:51.126724 1326355 command_runner.go:130] > #   The currently recognized values are:
	I1128 04:37:51.126732 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 04:37:51.126741 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 04:37:51.126763 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 04:37:51.126773 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 04:37:51.126807 1326355 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 04:37:51.126819 1326355 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 04:37:51.126834 1326355 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 04:37:51.126846 1326355 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 04:37:51.126861 1326355 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 04:37:51.126870 1326355 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 04:37:51.126904 1326355 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1128 04:37:51.126912 1326355 command_runner.go:130] > runtime_type = "oci"
	I1128 04:37:51.126918 1326355 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 04:37:51.126924 1326355 command_runner.go:130] > runtime_config_path = ""
	I1128 04:37:51.126931 1326355 command_runner.go:130] > monitor_path = ""
	I1128 04:37:51.126942 1326355 command_runner.go:130] > monitor_cgroup = ""
	I1128 04:37:51.126960 1326355 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 04:37:51.127000 1326355 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 04:37:51.127011 1326355 command_runner.go:130] > # running containers
	I1128 04:37:51.127017 1326355 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 04:37:51.127028 1326355 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 04:37:51.127048 1326355 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 04:37:51.127058 1326355 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1128 04:37:51.127065 1326355 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 04:37:51.127071 1326355 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 04:37:51.127081 1326355 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 04:37:51.127087 1326355 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 04:37:51.127094 1326355 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 04:37:51.127103 1326355 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
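Following the handler format documented above, registering crun next to runc would look roughly like the sketch below (the /usr/bin/crun path and /run/crun root are assumptions; this run only defines the runc handler):
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"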
	I1128 04:37:51.127111 1326355 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 04:37:51.127126 1326355 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 04:37:51.127138 1326355 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 04:37:51.127169 1326355 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1128 04:37:51.127194 1326355 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 04:37:51.127202 1326355 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 04:37:51.127216 1326355 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 04:37:51.127229 1326355 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 04:37:51.127242 1326355 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 04:37:51.127251 1326355 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 04:37:51.127257 1326355 command_runner.go:130] > # Example:
	I1128 04:37:51.127270 1326355 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 04:37:51.127276 1326355 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 04:37:51.127285 1326355 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 04:37:51.127292 1326355 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 04:37:51.127297 1326355 command_runner.go:130] > # cpuset = "0-1"
	I1128 04:37:51.127308 1326355 command_runner.go:130] > # cpushares = 0
	I1128 04:37:51.127316 1326355 command_runner.go:130] > # Where:
	I1128 04:37:51.127323 1326355 command_runner.go:130] > # The workload name is workload-type.
	I1128 04:37:51.127335 1326355 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 04:37:51.127353 1326355 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 04:37:51.127363 1326355 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 04:37:51.127377 1326355 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 04:37:51.127384 1326355 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 04:37:51.127389 1326355 command_runner.go:130] > # 
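On the pod side, opting into the workload-type example above takes the activation annotation plus, optionally, a per-container override built from the documented $annotation_prefix.$resource/$ctrName form; a hedged YAML sketch (the pod name, container name, and "512" value are illustrative):
	apiVersion: v1
	kind: Pod
	metadata:
	  name: example
	  annotations:
	    io.crio/workload: ""                       # activation annotation, value ignored
	    io.crio.workload-type.cpushares/app: "512" # per-container cpushares override
	spec:
	  containers:
	    - name: app
	      image: registry.k8s.io/pause:3.9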
	I1128 04:37:51.127411 1326355 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 04:37:51.127418 1326355 command_runner.go:130] > #
	I1128 04:37:51.127426 1326355 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 04:37:51.127434 1326355 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 04:37:51.127453 1326355 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 04:37:51.127464 1326355 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 04:37:51.127475 1326355 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 04:37:51.127481 1326355 command_runner.go:130] > [crio.image]
	I1128 04:37:51.127488 1326355 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 04:37:51.127499 1326355 command_runner.go:130] > # default_transport = "docker://"
	I1128 04:37:51.127508 1326355 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 04:37:51.127523 1326355 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 04:37:51.127531 1326355 command_runner.go:130] > # global_auth_file = ""
	I1128 04:37:51.127541 1326355 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 04:37:51.127548 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:37:51.127562 1326355 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 04:37:51.127571 1326355 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 04:37:51.127582 1326355 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 04:37:51.127589 1326355 command_runner.go:130] > # This option supports live configuration reload.
	I1128 04:37:51.127595 1326355 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 04:37:51.127603 1326355 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 04:37:51.127614 1326355 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1128 04:37:51.127630 1326355 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1128 04:37:51.127645 1326355 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 04:37:51.127657 1326355 command_runner.go:130] > # pause_command = "/pause"
	I1128 04:37:51.127671 1326355 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 04:37:51.127691 1326355 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 04:37:51.127709 1326355 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 04:37:51.127720 1326355 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 04:37:51.127730 1326355 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 04:37:51.127740 1326355 command_runner.go:130] > # signature_policy = ""
	I1128 04:37:51.127758 1326355 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 04:37:51.127769 1326355 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 04:37:51.127781 1326355 command_runner.go:130] > # changing them here.
	I1128 04:37:51.127787 1326355 command_runner.go:130] > # insecure_registries = [
	I1128 04:37:51.127792 1326355 command_runner.go:130] > # ]
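If TLS verification did need to be skipped for a private registry, the list would be populated as below (registry.local:5000 is hypothetical, and as the comment notes, /etc/containers/registries.conf is the preferred place for this):
	insecure_registries = [
		"registry.local:5000",
	]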
	I1128 04:37:51.127810 1326355 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 04:37:51.127818 1326355 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 04:37:51.127827 1326355 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 04:37:51.127834 1326355 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 04:37:51.127840 1326355 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 04:37:51.127848 1326355 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 04:37:51.127866 1326355 command_runner.go:130] > # CNI plugins.
	I1128 04:37:51.127871 1326355 command_runner.go:130] > [crio.network]
	I1128 04:37:51.127891 1326355 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 04:37:51.127902 1326355 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1128 04:37:51.127913 1326355 command_runner.go:130] > # cni_default_network = ""
	I1128 04:37:51.127921 1326355 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 04:37:51.127927 1326355 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 04:37:51.127933 1326355 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 04:37:51.127939 1326355 command_runner.go:130] > # plugin_dirs = [
	I1128 04:37:51.127946 1326355 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 04:37:51.127959 1326355 command_runner.go:130] > # ]
	I1128 04:37:51.127981 1326355 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 04:37:51.127990 1326355 command_runner.go:130] > [crio.metrics]
	I1128 04:37:51.128003 1326355 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 04:37:51.128011 1326355 command_runner.go:130] > # enable_metrics = false
	I1128 04:37:51.128017 1326355 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 04:37:51.128023 1326355 command_runner.go:130] > # By default, all metrics are enabled.
	I1128 04:37:51.128031 1326355 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 04:37:51.128048 1326355 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 04:37:51.128066 1326355 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 04:37:51.128072 1326355 command_runner.go:130] > # metrics_collectors = [
	I1128 04:37:51.128085 1326355 command_runner.go:130] > # 	"operations",
	I1128 04:37:51.128092 1326355 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 04:37:51.128099 1326355 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 04:37:51.128110 1326355 command_runner.go:130] > # 	"operations_errors",
	I1128 04:37:51.128117 1326355 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 04:37:51.128129 1326355 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 04:37:51.128141 1326355 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 04:37:51.128147 1326355 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 04:37:51.128153 1326355 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 04:37:51.128166 1326355 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 04:37:51.128172 1326355 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 04:37:51.128183 1326355 command_runner.go:130] > # 	"containers_oom_total",
	I1128 04:37:51.128188 1326355 command_runner.go:130] > # 	"containers_oom",
	I1128 04:37:51.128193 1326355 command_runner.go:130] > # 	"processes_defunct",
	I1128 04:37:51.128199 1326355 command_runner.go:130] > # 	"operations_total",
	I1128 04:37:51.128204 1326355 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 04:37:51.128219 1326355 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 04:37:51.128228 1326355 command_runner.go:130] > # 	"operations_errors_total",
	I1128 04:37:51.128239 1326355 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 04:37:51.128245 1326355 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 04:37:51.128257 1326355 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 04:37:51.128262 1326355 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 04:37:51.128276 1326355 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 04:37:51.128282 1326355 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 04:37:51.128289 1326355 command_runner.go:130] > # ]
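Enabling metrics with a trimmed collector set combines the options above; the collector names come from the list, and 9090 matches the metrics_port default shown below:
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]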
	I1128 04:37:51.128296 1326355 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 04:37:51.128306 1326355 command_runner.go:130] > # metrics_port = 9090
	I1128 04:37:51.128313 1326355 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 04:37:51.128318 1326355 command_runner.go:130] > # metrics_socket = ""
	I1128 04:37:51.128329 1326355 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 04:37:51.128337 1326355 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 04:37:51.128356 1326355 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 04:37:51.128363 1326355 command_runner.go:130] > # certificate on any modification event.
	I1128 04:37:51.128368 1326355 command_runner.go:130] > # metrics_cert = ""
	I1128 04:37:51.128386 1326355 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 04:37:51.128399 1326355 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 04:37:51.128406 1326355 command_runner.go:130] > # metrics_key = ""
	I1128 04:37:51.128421 1326355 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 04:37:51.128506 1326355 command_runner.go:130] > [crio.tracing]
	I1128 04:37:51.128529 1326355 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 04:37:51.128536 1326355 command_runner.go:130] > # enable_tracing = false
	I1128 04:37:51.128545 1326355 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1128 04:37:51.128555 1326355 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 04:37:51.128571 1326355 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 04:37:51.128578 1326355 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 04:37:51.128594 1326355 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 04:37:51.128616 1326355 command_runner.go:130] > [crio.stats]
	I1128 04:37:51.128626 1326355 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 04:37:51.128637 1326355 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 04:37:51.128643 1326355 command_runner.go:130] > # stats_collection_period = 0
	I1128 04:37:51.128730 1326355 command_runner.go:130] ! time="2023-11-28 04:37:51.115188354Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1128 04:37:51.128761 1326355 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
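Everything above is CRI-O's commented default configuration being echoed back during provisioning; on the node it can be reproduced with the crio config subcommand (a sketch; flag behavior may vary across CRI-O versions, this run uses 1.24.6):
	# effective configuration, including any file overrides
	crio config
	# built-in defaults only
	crio config --default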
	I1128 04:37:51.128866 1326355 cni.go:84] Creating CNI manager for ""
	I1128 04:37:51.128883 1326355 cni.go:136] 2 nodes found, recommending kindnet
	I1128 04:37:51.128899 1326355 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:37:51.129223 1326355 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-448128 NodeName:multinode-448128-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:37:51.129451 1326355 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-448128-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:37:51.129533 1326355 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-448128-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-448128 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:37:51.129636 1326355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:37:51.141170 1326355 command_runner.go:130] > kubeadm
	I1128 04:37:51.141210 1326355 command_runner.go:130] > kubectl
	I1128 04:37:51.141216 1326355 command_runner.go:130] > kubelet
	I1128 04:37:51.142964 1326355 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:37:51.143099 1326355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1128 04:37:51.155293 1326355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1128 04:37:51.178766 1326355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
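With the drop-in and unit written, the effective kubelet service on the node is the base unit plus 10-kubeadm.conf; a sketch of how to inspect it over the same SSH session:
	systemctl cat kubelet          # base unit plus drop-ins
	systemctl status kubelet --no-pager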
	I1128 04:37:51.202651 1326355 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1128 04:37:51.207399 1326355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:37:51.220952 1326355 host.go:66] Checking if "multinode-448128" exists ...
	I1128 04:37:51.221253 1326355 start.go:304] JoinCluster: &{Name:multinode-448128 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-448128 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:37:51.221351 1326355 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1128 04:37:51.221407 1326355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:37:51.221829 1326355 config.go:182] Loaded profile config "multinode-448128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:37:51.243941 1326355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:37:51.414494 1326355 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token gswenw.egz8qxnrxh1m7bjt --discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c 
	I1128 04:37:51.414535 1326355 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 04:37:51.414562 1326355 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gswenw.egz8qxnrxh1m7bjt --discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-448128-m02"
	I1128 04:37:51.461157 1326355 command_runner.go:130] > [preflight] Running pre-flight checks
	I1128 04:37:51.508078 1326355 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1128 04:37:51.508103 1326355 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1050-aws
	I1128 04:37:51.508110 1326355 command_runner.go:130] > OS: Linux
	I1128 04:37:51.508119 1326355 command_runner.go:130] > CGROUPS_CPU: enabled
	I1128 04:37:51.508126 1326355 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1128 04:37:51.508132 1326355 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1128 04:37:51.508139 1326355 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1128 04:37:51.508146 1326355 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1128 04:37:51.508153 1326355 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1128 04:37:51.508160 1326355 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1128 04:37:51.508167 1326355 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1128 04:37:51.508178 1326355 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1128 04:37:51.624628 1326355 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1128 04:37:51.624653 1326355 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1128 04:37:51.661908 1326355 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:37:51.662141 1326355 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:37:51.662159 1326355 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 04:37:51.771425 1326355 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1128 04:37:54.288757 1326355 command_runner.go:130] > This node has joined the cluster:
	I1128 04:37:54.288822 1326355 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1128 04:37:54.288846 1326355 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1128 04:37:54.288863 1326355 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1128 04:37:54.292382 1326355 command_runner.go:130] ! W1128 04:37:51.460439    1025 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1128 04:37:54.292416 1326355 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1128 04:37:54.292433 1326355 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:37:54.292451 1326355 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gswenw.egz8qxnrxh1m7bjt --discovery-token-ca-cert-hash sha256:2b82e38d2d31e35b1ca1e5bf9ca1a9b4352ba216aa6a171488e9bb15f42a5d8c --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-448128-m02": (2.877875742s)
	I1128 04:37:54.292469 1326355 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1128 04:37:54.530360 1326355 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1128 04:37:54.530395 1326355 start.go:306] JoinCluster complete in 3.309142109s
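As the join output above suggests, the new worker can also be confirmed from the control plane; a sketch of that check (output omitted, since the log below verifies readiness through the API instead):
	kubectl --context multinode-448128 get nodes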
	I1128 04:37:54.530408 1326355 cni.go:84] Creating CNI manager for ""
	I1128 04:37:54.530415 1326355 cni.go:136] 2 nodes found, recommending kindnet
	I1128 04:37:54.530469 1326355 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 04:37:54.534890 1326355 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 04:37:54.534916 1326355 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1128 04:37:54.534927 1326355 command_runner.go:130] > Device: 3ah/58d	Inode: 5452979     Links: 1
	I1128 04:37:54.534935 1326355 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 04:37:54.534946 1326355 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1128 04:37:54.534953 1326355 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1128 04:37:54.534959 1326355 command_runner.go:130] > Change: 2023-11-28 04:13:24.893843648 +0000
	I1128 04:37:54.534967 1326355 command_runner.go:130] >  Birth: 2023-11-28 04:13:24.849843889 +0000
	I1128 04:37:54.535019 1326355 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 04:37:54.535032 1326355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 04:37:54.557797 1326355 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 04:37:54.914133 1326355 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1128 04:37:54.921073 1326355 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1128 04:37:54.926788 1326355 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1128 04:37:54.942232 1326355 command_runner.go:130] > daemonset.apps/kindnet configured
	I1128 04:37:54.948798 1326355 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:37:54.949097 1326355 kapi.go:59] client config for multinode-448128: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:37:54.949437 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 04:37:54.949446 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:54.949455 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:54.949462 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:54.952009 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:54.952031 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:54.952040 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:54.952047 1326355 round_trippers.go:580]     Content-Length: 291
	I1128 04:37:54.952056 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:54 GMT
	I1128 04:37:54.952073 1326355 round_trippers.go:580]     Audit-Id: 04807328-cc77-46f1-99db-0a8e233fdbba
	I1128 04:37:54.952089 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:54.952095 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:54.952106 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:54.952252 1326355 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0734a789-b557-4140-8c57-08339bccd505","resourceVersion":"409","creationTimestamp":"2023-11-28T04:36:52Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1128 04:37:54.952350 1326355 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-448128" context rescaled to 1 replicas
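The rescale goes through the Deployment's scale subresource (the GET shown just above); the equivalent manual step would be (sketch):
	kubectl --context multinode-448128 -n kube-system scale deployment coredns --replicas=1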
	I1128 04:37:54.952378 1326355 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 04:37:54.954753 1326355 out.go:177] * Verifying Kubernetes components...
	I1128 04:37:54.956944 1326355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:37:54.998405 1326355 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:37:54.998681 1326355 kapi.go:59] client config for multinode-448128: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/multinode-448128/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:37:54.998965 1326355 node_ready.go:35] waiting up to 6m0s for node "multinode-448128-m02" to be "Ready" ...
	I1128 04:37:54.999036 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:54.999046 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:54.999056 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:54.999066 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:55.002704 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:55.002756 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:55.002765 1326355 round_trippers.go:580]     Audit-Id: e8d39552-ff89-4c97-957a-1e308ed4303f
	I1128 04:37:55.002772 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:55.002778 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:55.002784 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:55.002791 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:55.002797 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:55 GMT
	I1128 04:37:55.003588 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"449","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5594 chars]
	I1128 04:37:55.004155 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:55.004200 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:55.004215 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:55.004224 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:55.011022 1326355 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1128 04:37:55.011046 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:55.011055 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:55.011062 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:55 GMT
	I1128 04:37:55.011068 1326355 round_trippers.go:580]     Audit-Id: 038175c8-ee01-41a3-a328-ad3a64d52ff9
	I1128 04:37:55.011075 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:55.011082 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:55.011088 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:55.011926 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"449","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 5594 chars]
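The polling that follows repeats this GET until the node reports a Ready condition; the same wait can be expressed with kubectl (a sketch mirroring the 6m0s timeout from the log):
	kubectl --context multinode-448128 wait --for=condition=Ready node/multinode-448128-m02 --timeout=6m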
	I1128 04:37:55.513219 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:55.513247 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:55.513257 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:55.513264 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:55.515947 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:55.515976 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:55.515985 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:55.515992 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:55 GMT
	I1128 04:37:55.516001 1326355 round_trippers.go:580]     Audit-Id: f19673db-ef41-4f82-bcaf-271772d11766
	I1128 04:37:55.516007 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:55.516013 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:55.516019 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:55.516176 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:37:56.013639 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:56.013666 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:56.013676 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:56.013684 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:56.016260 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:56.016285 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:56.016295 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:56.016301 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:56.016308 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:56 GMT
	I1128 04:37:56.016314 1326355 round_trippers.go:580]     Audit-Id: dc4a2f1c-0b30-4725-bdf6-a6e680d5f693
	I1128 04:37:56.016321 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:56.016327 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:56.016449 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:37:56.513452 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:56.513480 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:56.513489 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:56.513496 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:56.516030 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:56.516051 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:56.516060 1326355 round_trippers.go:580]     Audit-Id: cfd2d4b3-31fd-442e-8680-5be8a60aeece
	I1128 04:37:56.516069 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:56.516075 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:56.516081 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:56.516087 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:56.516093 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:56 GMT
	I1128 04:37:56.516300 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:37:57.012712 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:57.012739 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:57.012749 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:57.012756 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:57.015294 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:57.015320 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:57.015330 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:57 GMT
	I1128 04:37:57.015338 1326355 round_trippers.go:580]     Audit-Id: 32588742-02a5-4f68-b306-aa17fdeb59d5
	I1128 04:37:57.015344 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:57.015351 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:57.015357 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:57.015368 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:57.015737 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:37:57.016136 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:37:57.513430 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:57.513459 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:57.513477 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:57.513484 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:57.516541 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:57.516566 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:57.516575 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:57.516583 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:57 GMT
	I1128 04:37:57.516591 1326355 round_trippers.go:580]     Audit-Id: 5ee01195-90de-4a9b-b238-cfcfbca4e277
	I1128 04:37:57.516597 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:57.516603 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:57.516615 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:57.516838 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:37:58.013309 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:58.013339 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:58.013349 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:58.013356 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:58.017094 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:37:58.017124 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:58.017134 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:58.017141 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:58.017148 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:58 GMT
	I1128 04:37:58.017154 1326355 round_trippers.go:580]     Audit-Id: 439bdba4-95f7-4e6a-8cf1-2223852d6ebe
	I1128 04:37:58.017160 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:58.017167 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:58.017292 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:37:58.513501 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:58.513528 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:58.513537 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:58.513544 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:58.516535 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:58.516559 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:58.516568 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:58.516575 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:58.516596 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:58 GMT
	I1128 04:37:58.516614 1326355 round_trippers.go:580]     Audit-Id: 8b6e500c-4bfc-4660-8f34-28eca819f14a
	I1128 04:37:58.516621 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:58.516629 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:58.516759 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:37:59.013271 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:59.013302 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:59.013312 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:59.013319 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:59.015932 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:59.015954 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:59.015964 1326355 round_trippers.go:580]     Audit-Id: 5eef426c-8feb-48e2-b9a3-f027ccc2de1c
	I1128 04:37:59.015971 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:59.015977 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:59.015984 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:59.015990 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:59.015999 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:59 GMT
	I1128 04:37:59.016177 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:37:59.016570 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:37:59.513374 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:37:59.513400 1326355 round_trippers.go:469] Request Headers:
	I1128 04:37:59.513410 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:37:59.513418 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:37:59.515937 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:37:59.515964 1326355 round_trippers.go:577] Response Headers:
	I1128 04:37:59.515973 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:37:59.515980 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:37:59 GMT
	I1128 04:37:59.515987 1326355 round_trippers.go:580]     Audit-Id: e9cccdf3-27f3-469a-ad9a-75aa5257123d
	I1128 04:37:59.515993 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:37:59.515999 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:37:59.516007 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:37:59.516146 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:00.019228 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:00.019252 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:00.019261 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:00.019268 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:00.023074 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:38:00.023105 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:00.023115 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:00 GMT
	I1128 04:38:00.023123 1326355 round_trippers.go:580]     Audit-Id: 355430c9-ea6c-4316-84f1-d6f3706d5c2a
	I1128 04:38:00.023129 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:00.023135 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:00.023141 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:00.023147 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:00.023648 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:00.513499 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:00.513525 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:00.513534 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:00.513543 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:00.516461 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:00.516483 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:00.516493 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:00.516500 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:00 GMT
	I1128 04:38:00.516506 1326355 round_trippers.go:580]     Audit-Id: 2cbe340c-31b7-40c9-8bfe-e3eb9b6c44b3
	I1128 04:38:00.516512 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:00.516519 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:00.516525 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:00.517196 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:01.012810 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:01.012839 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:01.012850 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:01.012858 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:01.015391 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:01.015416 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:01.015425 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:01.015432 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:01.015439 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:01.015445 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:01.015452 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:01 GMT
	I1128 04:38:01.015464 1326355 round_trippers.go:580]     Audit-Id: 99becc80-42fc-4a2e-b90c-7a55f5008688
	I1128 04:38:01.015758 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:01.513464 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:01.513496 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:01.513506 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:01.513513 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:01.516103 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:01.516132 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:01.516142 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:01.516151 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:01.516157 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:01.516164 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:01 GMT
	I1128 04:38:01.516170 1326355 round_trippers.go:580]     Audit-Id: a2ff3aa5-b691-475f-a3b6-5e7909ba071f
	I1128 04:38:01.516176 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:01.516297 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:01.516715 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:02.013392 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:02.013420 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:02.013430 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:02.013438 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:02.015987 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:02.016019 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:02.016028 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:02.016035 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:02 GMT
	I1128 04:38:02.016041 1326355 round_trippers.go:580]     Audit-Id: 73a5d68a-6398-470c-b21e-75fc0e01dbc5
	I1128 04:38:02.016047 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:02.016053 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:02.016060 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:02.016217 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:02.512706 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:02.512732 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:02.512749 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:02.512756 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:02.515398 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:02.515420 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:02.515429 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:02.515436 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:02.515442 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:02.515448 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:02.515454 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:02 GMT
	I1128 04:38:02.515460 1326355 round_trippers.go:580]     Audit-Id: 8516ca33-a1b2-4e45-99d9-b92666170dcd
	I1128 04:38:02.515576 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:03.012778 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:03.012809 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:03.012819 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:03.012826 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:03.015676 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:03.015703 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:03.015712 1326355 round_trippers.go:580]     Audit-Id: 43eee51a-ce3d-419d-bdb2-bc970f09cd40
	I1128 04:38:03.015720 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:03.015726 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:03.015732 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:03.015739 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:03.015750 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:03 GMT
	I1128 04:38:03.015896 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:03.512853 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:03.512881 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:03.512891 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:03.512898 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:03.515420 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:03.515442 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:03.515451 1326355 round_trippers.go:580]     Audit-Id: 6d6f6f01-9098-4d94-905b-85c0a093d126
	I1128 04:38:03.515457 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:03.515464 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:03.515470 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:03.515476 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:03.515482 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:03 GMT
	I1128 04:38:03.515670 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:04.013380 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:04.013407 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:04.013418 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:04.013425 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:04.016058 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:04.016088 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:04.016097 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:04.016104 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:04.016110 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:04.016117 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:04 GMT
	I1128 04:38:04.016123 1326355 round_trippers.go:580]     Audit-Id: bd4afe68-65ba-4a04-ab56-c194a42e56c1
	I1128 04:38:04.016129 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:04.016249 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"460","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5703 chars]
	I1128 04:38:04.016633 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:04.513412 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:04.513438 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:04.513448 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:04.513456 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:04.516387 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:04.516413 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:04.516422 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:04 GMT
	I1128 04:38:04.516429 1326355 round_trippers.go:580]     Audit-Id: f387fb6f-ce91-484f-8c6c-1bca627edf13
	I1128 04:38:04.516435 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:04.516442 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:04.516448 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:04.516455 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:04.516608 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:05.012885 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:05.012918 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:05.012928 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:05.012935 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:05.015549 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:05.015573 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:05.015582 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:05.015589 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:05.015595 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:05.015601 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:05 GMT
	I1128 04:38:05.015608 1326355 round_trippers.go:580]     Audit-Id: 778be7d8-1d9e-4532-8777-929f0bfbc46a
	I1128 04:38:05.015614 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:05.015881 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:05.513015 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:05.513040 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:05.513050 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:05.513057 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:05.515546 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:05.515568 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:05.515577 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:05 GMT
	I1128 04:38:05.515586 1326355 round_trippers.go:580]     Audit-Id: 57d8a30c-fc57-4b9f-bbc5-bacac3033ec5
	I1128 04:38:05.515592 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:05.515598 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:05.515605 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:05.515611 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:05.515741 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:06.015274 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:06.015305 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:06.015316 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:06.015324 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:06.018001 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:06.018030 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:06.018040 1326355 round_trippers.go:580]     Audit-Id: e5d56481-5f39-439e-ad15-9cb47e45afd1
	I1128 04:38:06.018047 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:06.018053 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:06.018060 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:06.018066 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:06.018078 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:06 GMT
	I1128 04:38:06.018202 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:06.018587 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:06.513461 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:06.513487 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:06.513497 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:06.513504 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:06.516343 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:06.516369 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:06.516379 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:06.516385 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:06.516392 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:06.516399 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:06.516406 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:06 GMT
	I1128 04:38:06.516412 1326355 round_trippers.go:580]     Audit-Id: ac5c4f58-65bf-4c28-9623-3ca09a587e87
	I1128 04:38:06.516789 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:07.025639 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:07.025667 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:07.025677 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:07.025687 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:07.028475 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:07.028503 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:07.028512 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:07.028519 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:07 GMT
	I1128 04:38:07.028526 1326355 round_trippers.go:580]     Audit-Id: e6a00eb4-9493-4c3b-8c65-754a92235d9c
	I1128 04:38:07.028532 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:07.028538 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:07.028544 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:07.028681 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:07.513496 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:07.513540 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:07.513549 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:07.513556 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:07.517031 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:38:07.517056 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:07.517065 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:07.517071 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:07.517078 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:07.517084 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:07.517091 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:07 GMT
	I1128 04:38:07.517097 1326355 round_trippers.go:580]     Audit-Id: dae1630a-b109-497b-b24e-fa9ce76fe51a
	I1128 04:38:07.517214 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:08.013323 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:08.013352 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:08.013362 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:08.013370 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:08.015968 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:08.015997 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:08.016007 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:08.016013 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:08.016020 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:08.016031 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:08 GMT
	I1128 04:38:08.016037 1326355 round_trippers.go:580]     Audit-Id: 10945f84-6795-4edf-bcd1-0de9291918be
	I1128 04:38:08.016043 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:08.016151 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:08.513060 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:08.513099 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:08.513111 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:08.513127 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:08.515644 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:08.515675 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:08.515685 1326355 round_trippers.go:580]     Audit-Id: e11730c9-0c6a-411f-94ae-1130838fae9a
	I1128 04:38:08.515691 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:08.515697 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:08.515703 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:08.515710 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:08.515720 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:08 GMT
	I1128 04:38:08.515858 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:08.516230 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:09.012726 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:09.012753 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:09.012764 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:09.012772 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:09.015501 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:09.015527 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:09.015537 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:09 GMT
	I1128 04:38:09.015543 1326355 round_trippers.go:580]     Audit-Id: 8c5b691c-5a7e-445a-9ee7-319374357fa6
	I1128 04:38:09.015549 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:09.015556 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:09.015562 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:09.015569 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:09.015675 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:09.513583 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:09.513613 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:09.513623 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:09.513643 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:09.516184 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:09.516212 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:09.516222 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:09.516229 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:09.516235 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:09 GMT
	I1128 04:38:09.516242 1326355 round_trippers.go:580]     Audit-Id: 3190fcfc-ab38-4c5e-8e80-b70fbcf5cf2c
	I1128 04:38:09.516248 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:09.516254 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:09.516389 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:10.013519 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:10.013548 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:10.013567 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:10.013574 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:10.016735 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:38:10.016765 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:10.016775 1326355 round_trippers.go:580]     Audit-Id: 5f100bca-eaa2-4a81-987c-56bfcfab95fc
	I1128 04:38:10.016782 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:10.016788 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:10.016794 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:10.016800 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:10.016809 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:10 GMT
	I1128 04:38:10.016947 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:10.513052 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:10.513077 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:10.513086 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:10.513094 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:10.515583 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:10.515608 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:10.515617 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:10.515630 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:10.515637 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:10 GMT
	I1128 04:38:10.515643 1326355 round_trippers.go:580]     Audit-Id: 2acbbddc-d11c-4e5e-8d15-e9d7114b48e1
	I1128 04:38:10.515649 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:10.515655 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:10.515886 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:10.516284 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:11.012973 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:11.013000 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:11.013011 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:11.013018 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:11.015562 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:11.015587 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:11.015596 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:11.015603 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:11.015610 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:11.015616 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:11 GMT
	I1128 04:38:11.015623 1326355 round_trippers.go:580]     Audit-Id: 7bb9b0fc-466f-471f-8e32-d5611a187282
	I1128 04:38:11.015634 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:11.015794 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:11.512869 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:11.512897 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:11.512907 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:11.512915 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:11.515734 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:11.515761 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:11.515771 1326355 round_trippers.go:580]     Audit-Id: 2d25fbb6-a756-414f-b17a-3a877a4b4f42
	I1128 04:38:11.515778 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:11.515788 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:11.515798 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:11.515810 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:11.515832 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:11 GMT
	I1128 04:38:11.515982 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:12.013259 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:12.013286 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:12.013296 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:12.013303 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:12.015957 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:12.015986 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:12.015995 1326355 round_trippers.go:580]     Audit-Id: a460417e-c1a0-4a38-8d34-96885914f5a0
	I1128 04:38:12.016006 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:12.016013 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:12.016019 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:12.016030 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:12.016038 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:12 GMT
	I1128 04:38:12.016191 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:12.512726 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:12.512753 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:12.512762 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:12.512770 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:12.515349 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:12.515376 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:12.515385 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:12.515392 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:12.515405 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:12 GMT
	I1128 04:38:12.515412 1326355 round_trippers.go:580]     Audit-Id: 39392be6-ff8f-4ff8-b47a-dc6e16a39696
	I1128 04:38:12.515418 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:12.515425 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:12.515567 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:13.013377 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:13.013415 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:13.013430 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:13.013459 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:13.016295 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:13.016318 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:13.016327 1326355 round_trippers.go:580]     Audit-Id: ec701069-fa76-4fa0-9ec6-c9d39a897901
	I1128 04:38:13.016334 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:13.016340 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:13.016346 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:13.016352 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:13.016358 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:13 GMT
	I1128 04:38:13.016484 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:13.016895 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:13.512689 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:13.512715 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:13.512724 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:13.512732 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:13.515207 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:13.515236 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:13.515246 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:13.515253 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:13.515260 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:13.515267 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:13.515273 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:13 GMT
	I1128 04:38:13.515280 1326355 round_trippers.go:580]     Audit-Id: 28e6fb9f-69f2-4146-9792-faecb8ef9966
	I1128 04:38:13.515387 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:14.012866 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:14.012893 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:14.012904 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:14.012911 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:14.015584 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:14.015607 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:14.015616 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:14.015623 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:14 GMT
	I1128 04:38:14.015629 1326355 round_trippers.go:580]     Audit-Id: 5d7916eb-917c-47e6-bbfb-194c14148a98
	I1128 04:38:14.015636 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:14.015642 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:14.015648 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:14.015770 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:14.512702 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:14.512726 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:14.512736 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:14.512743 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:14.515261 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:14.515301 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:14.515310 1326355 round_trippers.go:580]     Audit-Id: 7a80d215-875a-4697-8656-ce41cc0c2ce8
	I1128 04:38:14.515317 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:14.515323 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:14.515332 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:14.515338 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:14.515350 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:14 GMT
	I1128 04:38:14.515505 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:15.012899 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:15.012932 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:15.012942 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:15.012950 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:15.022152 1326355 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1128 04:38:15.022180 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:15.022190 1326355 round_trippers.go:580]     Audit-Id: d49aefbf-d1f8-4359-aaaa-e395ea123459
	I1128 04:38:15.022197 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:15.022203 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:15.022210 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:15.022216 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:15.022223 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:15 GMT
	I1128 04:38:15.022329 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:15.022744 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:15.512740 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:15.512765 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:15.512774 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:15.512782 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:15.515455 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:15.515485 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:15.515494 1326355 round_trippers.go:580]     Audit-Id: c03e5383-e19a-4bc3-8d5e-4a2b183be584
	I1128 04:38:15.515501 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:15.515507 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:15.515513 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:15.515520 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:15.515526 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:15 GMT
	I1128 04:38:15.515652 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:16.012709 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:16.012736 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:16.012757 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:16.012765 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:16.015418 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:16.015445 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:16.015455 1326355 round_trippers.go:580]     Audit-Id: bc914b3d-e52c-4004-af38-5c2d6664ce48
	I1128 04:38:16.015464 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:16.015470 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:16.015476 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:16.015483 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:16.015489 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:16 GMT
	I1128 04:38:16.015589 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:16.512712 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:16.512738 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:16.512748 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:16.512755 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:16.515375 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:16.515403 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:16.515413 1326355 round_trippers.go:580]     Audit-Id: 9245f98a-feaa-4794-ab77-de74d9522135
	I1128 04:38:16.515420 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:16.515426 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:16.515432 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:16.515439 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:16.515446 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:16 GMT
	I1128 04:38:16.515569 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:17.012615 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:17.012652 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:17.012678 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:17.012686 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:17.015348 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:17.015374 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:17.015383 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:17.015389 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:17.015396 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:17.015402 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:17 GMT
	I1128 04:38:17.015408 1326355 round_trippers.go:580]     Audit-Id: 0be4eb6c-3c76-47d2-b630-75c24582eb3e
	I1128 04:38:17.015414 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:17.015521 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:17.513372 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:17.513401 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:17.513410 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:17.513417 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:17.516056 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:17.516161 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:17.516179 1326355 round_trippers.go:580]     Audit-Id: 24688ae6-99fa-4471-99b8-742f797644ca
	I1128 04:38:17.516187 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:17.516206 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:17.516216 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:17.516223 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:17.516229 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:17 GMT
	I1128 04:38:17.516359 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:17.516815 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:18.013148 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:18.013176 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:18.013186 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:18.013193 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:18.015987 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:18.016012 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:18.016022 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:18.016029 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:18.016035 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:18.016041 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:18.016047 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:18 GMT
	I1128 04:38:18.016054 1326355 round_trippers.go:580]     Audit-Id: 45f4d87b-25d4-458a-8a38-f80c34b6c208
	I1128 04:38:18.016231 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:18.513458 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:18.513486 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:18.513504 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:18.513512 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:18.516172 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:18.516197 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:18.516206 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:18.516214 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:18 GMT
	I1128 04:38:18.516220 1326355 round_trippers.go:580]     Audit-Id: 26114a03-1256-455c-a0cb-e122ea5af643
	I1128 04:38:18.516226 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:18.516233 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:18.516238 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:18.516474 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:19.013611 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:19.013635 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:19.013645 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:19.013652 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:19.016213 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:19.016235 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:19.016244 1326355 round_trippers.go:580]     Audit-Id: a86adb03-8b5b-43ba-8513-1123b2d5ada3
	I1128 04:38:19.016251 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:19.016257 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:19.016263 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:19.016269 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:19.016276 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:19 GMT
	I1128 04:38:19.016396 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:19.513470 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:19.513493 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:19.513503 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:19.513510 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:19.516051 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:19.516073 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:19.516082 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:19.516089 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:19 GMT
	I1128 04:38:19.516098 1326355 round_trippers.go:580]     Audit-Id: ea066a74-689a-4067-8d86-bef1dfd138b9
	I1128 04:38:19.516104 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:19.516110 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:19.516117 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:19.516209 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:20.013634 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:20.013668 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:20.013678 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:20.013685 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:20.018120 1326355 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 04:38:20.018146 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:20.018156 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:20.018163 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:20.018170 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:20 GMT
	I1128 04:38:20.018176 1326355 round_trippers.go:580]     Audit-Id: 83138074-275c-472b-8a7e-cbec6e49a6bf
	I1128 04:38:20.018182 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:20.018188 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:20.018361 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:20.018751 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
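The block above is one iteration of minikube's node-readiness poll (node_ready.go): every ~500 ms it GETs the Node object and inspects its conditions until "Ready" turns "True". A minimal client-go sketch of the same pattern follows; it is illustrative only, not minikube's code, and assumes a kubeconfig at the default path pointing at this cluster.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			// One GET per iteration, mirroring the requests logged above.
			node, err := cs.CoreV1().Nodes().Get(context.TODO(),
				"multinode-448128-m02", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
		}
	}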
	I1128 04:38:20.513270 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:20.513302 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:20.513313 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:20.513320 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:20.515880 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:20.515902 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:20.515911 1326355 round_trippers.go:580]     Audit-Id: 8e16a540-ecf0-478c-8194-16cb9180a810
	I1128 04:38:20.515917 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:20.515923 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:20.515930 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:20.515936 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:20.515943 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:20 GMT
	I1128 04:38:20.516270 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:21.013482 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:21.013512 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:21.013523 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:21.013530 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:21.016050 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:21.016071 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:21.016081 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:21.016088 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:21 GMT
	I1128 04:38:21.016094 1326355 round_trippers.go:580]     Audit-Id: 71c16611-3842-4ade-ba00-f5cfcebb5a80
	I1128 04:38:21.016101 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:21.016107 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:21.016113 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:21.016224 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:21.513331 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:21.513359 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:21.513369 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:21.513376 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:21.515882 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:21.515906 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:21.515915 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:21.515922 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:21.515928 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:21 GMT
	I1128 04:38:21.515935 1326355 round_trippers.go:580]     Audit-Id: 58400a26-e9ae-4dd0-82c8-d32e9f46f85c
	I1128 04:38:21.515941 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:21.515947 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:21.516209 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:22.013372 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:22.013399 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:22.013408 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:22.013416 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:22.016102 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:22.016127 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:22.016136 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:22.016143 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:22.016149 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:22.016155 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:22.016162 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:22 GMT
	I1128 04:38:22.016169 1326355 round_trippers.go:580]     Audit-Id: 2e38bb94-37ee-4683-a819-62cf057ad6c1
	I1128 04:38:22.016284 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:22.512851 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:22.512881 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:22.512892 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:22.512899 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:22.515421 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:22.515442 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:22.515451 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:22.515458 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:22.515464 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:22.515471 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:22.515478 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:22 GMT
	I1128 04:38:22.515483 1326355 round_trippers.go:580]     Audit-Id: 87831e62-a5bb-4d28-b64e-fb157425c718
	I1128 04:38:22.515593 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:22.515983 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:23.013004 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:23.013028 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:23.013038 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:23.013046 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:23.015671 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:23.015692 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:23.015701 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:23 GMT
	I1128 04:38:23.015709 1326355 round_trippers.go:580]     Audit-Id: 13f5a7c5-808d-4642-bc46-984881c26fa7
	I1128 04:38:23.015715 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:23.015721 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:23.015727 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:23.015733 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:23.015912 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:23.512857 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:23.512884 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:23.512894 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:23.512901 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:23.515606 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:23.515629 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:23.515639 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:23 GMT
	I1128 04:38:23.515645 1326355 round_trippers.go:580]     Audit-Id: 70c38bf6-708e-4494-adec-b5d053e0d29c
	I1128 04:38:23.515651 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:23.515657 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:23.515663 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:23.515669 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:23.515832 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:24.012773 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:24.012803 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:24.012813 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:24.012820 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:24.015620 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:24.015653 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:24.015662 1326355 round_trippers.go:580]     Audit-Id: ca6ba2d7-7535-4d57-aa50-8735b7b37c0c
	I1128 04:38:24.015669 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:24.015677 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:24.015683 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:24.015689 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:24.015696 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:24 GMT
	I1128 04:38:24.015807 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:24.512884 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:24.512914 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:24.512924 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:24.512932 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:24.516075 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:38:24.516101 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:24.516110 1326355 round_trippers.go:580]     Audit-Id: 2a06919c-8a33-4c8a-943a-8384c2c3508a
	I1128 04:38:24.516117 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:24.516123 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:24.516129 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:24.516135 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:24.516142 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:24 GMT
	I1128 04:38:24.516287 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:24.516685 1326355 node_ready.go:58] node "multinode-448128-m02" has status "Ready":"False"
	I1128 04:38:25.013211 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:25.013237 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:25.013248 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:25.013255 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:25.016295 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:38:25.016320 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:25.016331 1326355 round_trippers.go:580]     Audit-Id: 7ce0fc9d-6aca-44bc-840a-45959eee90c2
	I1128 04:38:25.016338 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:25.016345 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:25.016351 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:25.016357 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:25.016365 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:25 GMT
	I1128 04:38:25.016517 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:25.512717 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:25.512742 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:25.512752 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:25.512759 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:25.515232 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:25.515254 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:25.515263 1326355 round_trippers.go:580]     Audit-Id: e46ad76e-b005-41e1-b8c8-f6da96f151f1
	I1128 04:38:25.515271 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:25.515277 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:25.515283 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:25.515289 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:25.515296 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:25 GMT
	I1128 04:38:25.515473 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"473","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1128 04:38:26.013256 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:26.013287 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.013298 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.013305 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.018638 1326355 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 04:38:26.018663 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.018672 1326355 round_trippers.go:580]     Audit-Id: 5114b712-7726-4206-822a-76dad87e40d9
	I1128 04:38:26.018679 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.018685 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.018691 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.018697 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.018703 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.018810 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"495","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I1128 04:38:26.019197 1326355 node_ready.go:49] node "multinode-448128-m02" has status "Ready":"True"
	I1128 04:38:26.019210 1326355 node_ready.go:38] duration metric: took 31.02022716s waiting for node "multinode-448128-m02" to be "Ready" ...
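The node reached Ready roughly 31 s after it registered. A rough hand-run equivalent of the polling loop above is "kubectl wait --for=condition=Ready node/multinode-448128-m02 --timeout=6m"; that is standard kubectl, shown here only as an approximation of what node_ready.go does internally.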
	I1128 04:38:26.019221 1326355 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:38:26.019285 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1128 04:38:26.019291 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.019299 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.019306 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.023448 1326355 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 04:38:26.023479 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.023488 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.023494 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.023501 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.023507 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.023514 1326355 round_trippers.go:580]     Audit-Id: de67cf3f-648f-475d-81f4-dd8904478ee8
	I1128 04:38:26.023520 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.024001 1326355 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"404","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
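With the node Ready, pod_ready.go fetches the entire kube-system PodList once (the ~69 kB response above) and then waits on each system-critical pod individually. A sketch of that single list call, assuming the imports and clientset cs from the polling sketch earlier; the helper name and the selector value are ours, not minikube's:

	// listSystemPods performs one List against kube-system, optionally
	// narrowed by a label selector such as "k8s-app=kube-dns".
	func listSystemPods(cs kubernetes.Interface, selector string) (*corev1.PodList, error) {
		return cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
	}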
	I1128 04:38:26.026909 1326355 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-h99h4" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.027011 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-h99h4
	I1128 04:38:26.027023 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.027033 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.027040 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.029656 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.029694 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.029704 1326355 round_trippers.go:580]     Audit-Id: 3109da5b-218f-4789-9065-8998e452f1fb
	I1128 04:38:26.029710 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.029717 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.029724 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.029735 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.029742 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.030150 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-h99h4","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"770a2e4e-e096-47e0-81a9-0623bbaa4825","resourceVersion":"404","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"48b21139-8ceb-4dbf-900a-4f0d78599911","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48b21139-8ceb-4dbf-900a-4f0d78599911\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1128 04:38:26.030729 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:38:26.030750 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.030759 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.030766 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.033237 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.033264 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.033273 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.033280 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.033287 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.033293 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.033307 1326355 round_trippers.go:580]     Audit-Id: 4f7577d4-1bad-43b6-9296-e27bb26f5c14
	I1128 04:38:26.033319 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.033668 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:38:26.034073 1326355 pod_ready.go:92] pod "coredns-5dd5756b68-h99h4" in "kube-system" namespace has status "Ready":"True"
	I1128 04:38:26.034085 1326355 pod_ready.go:81] duration metric: took 7.136997ms waiting for pod "coredns-5dd5756b68-h99h4" in "kube-system" namespace to be "Ready" ...
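Each of these per-pod waits reduces to reading the pod's PodReady condition (alongside a re-check of the node, hence the paired GETs of pod and node above). A sketch of the condition check, using the same corev1 import as earlier; podReady is our name, not minikube's:

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}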
	I1128 04:38:26.034096 1326355 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.034202 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-448128
	I1128 04:38:26.034207 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.034215 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.034222 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.036703 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.036729 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.036739 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.036746 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.036763 1326355 round_trippers.go:580]     Audit-Id: d76be910-e8fd-4a29-b0af-3d08030c325a
	I1128 04:38:26.036770 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.036781 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.036789 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.036957 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-448128","namespace":"kube-system","uid":"121c97bc-fd53-4694-a919-1df709813895","resourceVersion":"260","creationTimestamp":"2023-11-28T04:36:52Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"202ec28270a39c70a9db6e2ad9deefcf","kubernetes.io/config.mirror":"202ec28270a39c70a9db6e2ad9deefcf","kubernetes.io/config.seen":"2023-11-28T04:36:52.405621949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1128 04:38:26.037447 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:38:26.037457 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.037470 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.037478 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.039916 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.039942 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.039951 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.039957 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.039964 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.039993 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.040009 1326355 round_trippers.go:580]     Audit-Id: 5933cde0-b3f2-4993-8368-2f47df71be28
	I1128 04:38:26.040017 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.040165 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:38:26.040566 1326355 pod_ready.go:92] pod "etcd-multinode-448128" in "kube-system" namespace has status "Ready":"True"
	I1128 04:38:26.040587 1326355 pod_ready.go:81] duration metric: took 6.48146ms waiting for pod "etcd-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.040610 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.040725 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-448128
	I1128 04:38:26.040738 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.040746 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.040753 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.043236 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.043256 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.043264 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.043271 1326355 round_trippers.go:580]     Audit-Id: 56a9a8c0-a305-4785-a158-e1ea38c8355e
	I1128 04:38:26.043277 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.043284 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.043289 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.043296 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.043420 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-448128","namespace":"kube-system","uid":"ed60cc18-21a7-4a58-b1bf-929498ac7681","resourceVersion":"257","creationTimestamp":"2023-11-28T04:36:52Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"c0a401a1d2128f26b499573a61001053","kubernetes.io/config.mirror":"c0a401a1d2128f26b499573a61001053","kubernetes.io/config.seen":"2023-11-28T04:36:52.405629071Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1128 04:38:26.043947 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:38:26.043955 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.043963 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.043970 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.046443 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.046508 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.046523 1326355 round_trippers.go:580]     Audit-Id: e2319afc-08f3-43c1-bfcb-d7390e01dcd5
	I1128 04:38:26.046531 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.046537 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.046544 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.046550 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.046560 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.046816 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:38:26.047260 1326355 pod_ready.go:92] pod "kube-apiserver-multinode-448128" in "kube-system" namespace has status "Ready":"True"
	I1128 04:38:26.047280 1326355 pod_ready.go:81] duration metric: took 6.659921ms waiting for pod "kube-apiserver-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.047292 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.047366 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-448128
	I1128 04:38:26.047376 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.047384 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.047391 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.049823 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.049860 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.049869 1326355 round_trippers.go:580]     Audit-Id: 3b58d541-a6ac-4d8c-b941-831d82c5d7e0
	I1128 04:38:26.049875 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.049881 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.049888 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.049897 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.049909 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.050093 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-448128","namespace":"kube-system","uid":"b57f8849-158b-426b-ada5-bb5ea7e23ec8","resourceVersion":"256","creationTimestamp":"2023-11-28T04:36:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2900dc80bd738cc7d6eb7e628235b5db","kubernetes.io/config.mirror":"2900dc80bd738cc7d6eb7e628235b5db","kubernetes.io/config.seen":"2023-11-28T04:36:52.405630704Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1128 04:38:26.050642 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:38:26.050658 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.050666 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.050674 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.053264 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.053289 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.053298 1326355 round_trippers.go:580]     Audit-Id: 5741a80a-0457-4392-9a8d-4e8f61f56968
	I1128 04:38:26.053305 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.053311 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.053322 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.053334 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.053340 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.053783 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:38:26.054173 1326355 pod_ready.go:92] pod "kube-controller-manager-multinode-448128" in "kube-system" namespace has status "Ready":"True"
	I1128 04:38:26.054194 1326355 pod_ready.go:81] duration metric: took 6.893454ms waiting for pod "kube-controller-manager-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.054209 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mskz2" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.213601 1326355 request.go:629] Waited for 159.325344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mskz2
	I1128 04:38:26.213679 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mskz2
	I1128 04:38:26.213691 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.213701 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.213713 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.216451 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.216523 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.216547 1326355 round_trippers.go:580]     Audit-Id: ba72e5e5-0541-457e-a01a-ba18e370520a
	I1128 04:38:26.216572 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.216608 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.216621 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.216628 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.216636 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.216814 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mskz2","generateName":"kube-proxy-","namespace":"kube-system","uid":"36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465","resourceVersion":"371","creationTimestamp":"2023-11-28T04:37:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a7620ae4-04a3-4706-a2ec-b3cd57b95023","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a7620ae4-04a3-4706-a2ec-b3cd57b95023\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
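The "Waited for ... due to client-side throttling, not priority and fairness" lines in this stretch are emitted by client-go's request.go when its local rate limiter delays a request; as the message itself notes, this is client-side throttling, distinct from server-side API Priority and Fairness (the X-Kubernetes-Pf-* response headers above belong to the latter). Left unset, client-go falls back to rest.DefaultQPS (5) and rest.DefaultBurst (10), which spaces sustained requests about 200 ms apart, consistent with the ~160-196 ms waits logged here. A sketch of where those limits live, assuming the same imports as before; the helper name is ours:

	// newFastClient raises the client-side rate limits that produce the
	// "Waited for ..." log lines when a burst of requests is issued.
	func newFastClient() (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // default 5 when left at zero
		cfg.Burst = 100 // default 10 when left at zero
		return kubernetes.NewForConfig(cfg)
	}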
	I1128 04:38:26.413685 1326355 request.go:629] Waited for 196.326024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:38:26.413756 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:38:26.413766 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.413775 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.413785 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.416435 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.416459 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.416469 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.416476 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.416484 1326355 round_trippers.go:580]     Audit-Id: 53afd843-600f-4f53-9f75-69ada3373351
	I1128 04:38:26.416490 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.416496 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.416518 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.416636 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:38:26.417139 1326355 pod_ready.go:92] pod "kube-proxy-mskz2" in "kube-system" namespace has status "Ready":"True"
	I1128 04:38:26.417162 1326355 pod_ready.go:81] duration metric: took 362.945845ms waiting for pod "kube-proxy-mskz2" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.417174 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w85sn" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.613632 1326355 request.go:629] Waited for 196.388318ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w85sn
	I1128 04:38:26.613764 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w85sn
	I1128 04:38:26.613778 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.613788 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.613796 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.616680 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.616707 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.616717 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.616723 1326355 round_trippers.go:580]     Audit-Id: d39c1649-e1df-4827-bdbf-91582d14bad4
	I1128 04:38:26.616730 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.616736 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.616743 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.616750 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.616868 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w85sn","generateName":"kube-proxy-","namespace":"kube-system","uid":"8bfd128a-2d87-4d6b-b452-19256df844d9","resourceVersion":"461","creationTimestamp":"2023-11-28T04:37:54Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a7620ae4-04a3-4706-a2ec-b3cd57b95023","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a7620ae4-04a3-4706-a2ec-b3cd57b95023\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1128 04:38:26.813767 1326355 request.go:629] Waited for 196.342435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:26.813837 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128-m02
	I1128 04:38:26.813848 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:26.813856 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:26.813871 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:26.816597 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:26.816625 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:26.816634 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:26.816640 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:26.816647 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:26.816674 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:26 GMT
	I1128 04:38:26.816682 1326355 round_trippers.go:580]     Audit-Id: 249caffd-a18f-4955-8ab6-ccaaba063c0f
	I1128 04:38:26.816692 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:26.816852 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128-m02","uid":"72a5ef10-30e4-4554-9760-44b3d971b7fa","resourceVersion":"495","creationTimestamp":"2023-11-28T04:37:53Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:37:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I1128 04:38:26.817230 1326355 pod_ready.go:92] pod "kube-proxy-w85sn" in "kube-system" namespace has status "Ready":"True"
	I1128 04:38:26.817249 1326355 pod_ready.go:81] duration metric: took 400.064811ms waiting for pod "kube-proxy-w85sn" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:26.817262 1326355 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:27.013697 1326355 request.go:629] Waited for 196.364893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-448128
	I1128 04:38:27.013781 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-448128
	I1128 04:38:27.013788 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:27.013797 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:27.013808 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:27.016643 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:27.016741 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:27.016768 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:27.016791 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:27.016826 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:27.016855 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:27 GMT
	I1128 04:38:27.016879 1326355 round_trippers.go:580]     Audit-Id: b7fc130d-0489-4f97-a5e3-50b78092cb6a
	I1128 04:38:27.016901 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:27.017118 1326355 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-448128","namespace":"kube-system","uid":"4857acb0-f079-4948-b7d9-68e443c97acb","resourceVersion":"289","creationTimestamp":"2023-11-28T04:36:51Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8f23caa0ce4cee36da4381e5cce72405","kubernetes.io/config.mirror":"8f23caa0ce4cee36da4381e5cce72405","kubernetes.io/config.seen":"2023-11-28T04:36:44.411368392Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T04:36:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1128 04:38:27.213901 1326355 request.go:629] Waited for 196.256594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:38:27.214002 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-448128
	I1128 04:38:27.214015 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:27.214025 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:27.214051 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:27.217141 1326355 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 04:38:27.217168 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:27.217184 1326355 round_trippers.go:580]     Audit-Id: 401b1157-7d7d-41c8-a194-b41725fa0d4e
	I1128 04:38:27.217192 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:27.217201 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:27.217225 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:27.217239 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:27.217245 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:27 GMT
	I1128 04:38:27.217363 1326355 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T04:36:49Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1128 04:38:27.217811 1326355 pod_ready.go:92] pod "kube-scheduler-multinode-448128" in "kube-system" namespace has status "Ready":"True"
	I1128 04:38:27.217832 1326355 pod_ready.go:81] duration metric: took 400.562614ms waiting for pod "kube-scheduler-multinode-448128" in "kube-system" namespace to be "Ready" ...
	I1128 04:38:27.217846 1326355 pod_ready.go:38] duration metric: took 1.198615669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:38:27.217863 1326355 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:38:27.217930 1326355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:38:27.232528 1326355 system_svc.go:56] duration metric: took 14.657247ms WaitForService to wait for kubelet.
	I1128 04:38:27.232561 1326355 kubeadm.go:581] duration metric: took 32.280154108s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:38:27.232581 1326355 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:38:27.414041 1326355 request.go:629] Waited for 181.350339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1128 04:38:27.414110 1326355 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1128 04:38:27.414117 1326355 round_trippers.go:469] Request Headers:
	I1128 04:38:27.414126 1326355 round_trippers.go:473]     Accept: application/json, */*
	I1128 04:38:27.414133 1326355 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1128 04:38:27.417146 1326355 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 04:38:27.417219 1326355 round_trippers.go:577] Response Headers:
	I1128 04:38:27.417241 1326355 round_trippers.go:580]     Date: Tue, 28 Nov 2023 04:38:27 GMT
	I1128 04:38:27.417266 1326355 round_trippers.go:580]     Audit-Id: 06ad93a7-ddfe-46cc-a7f2-f663e74ba090
	I1128 04:38:27.417301 1326355 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 04:38:27.417329 1326355 round_trippers.go:580]     Content-Type: application/json
	I1128 04:38:27.417352 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: e294579b-0e2b-414f-83b7-e5c9ad4ca825
	I1128 04:38:27.417375 1326355 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5c85ad40-1d17-44ff-9074-e493cabd5074
	I1128 04:38:27.417610 1326355 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"multinode-448128","uid":"60ed4222-1f04-4ab8-9440-506f7522e633","resourceVersion":"386","creationTimestamp":"2023-11-28T04:36:49Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-448128","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-448128","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T04_36_53_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1128 04:38:27.418284 1326355 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:38:27.418310 1326355 node_conditions.go:123] node cpu capacity is 2
	I1128 04:38:27.418321 1326355 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:38:27.418327 1326355 node_conditions.go:123] node cpu capacity is 2
	I1128 04:38:27.418332 1326355 node_conditions.go:105] duration metric: took 185.74582ms to run NodePressure ...
	I1128 04:38:27.418349 1326355 start.go:228] waiting for startup goroutines ...
	I1128 04:38:27.418384 1326355 start.go:242] writing updated cluster config ...
	I1128 04:38:27.418720 1326355 ssh_runner.go:195] Run: rm -f paused
	I1128 04:38:27.481266 1326355 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:38:27.484609 1326355 out.go:177] * Done! kubectl is now configured to use "multinode-448128" cluster and "default" namespace by default
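
The run above ends with a readiness loop: pod_ready.go repeatedly GETs each system pod until its Ready condition reports True, with client-go's rate limiter spacing the requests (the "Waited ... due to client-side throttling" lines). Below is a minimal client-go sketch of that polling pattern, assuming a default kubeconfig; it is an illustration of the technique, not minikube's actual pod_ready implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
        for time.Now().Before(deadline) {
            // Pod name taken from this log as an example.
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "kube-proxy-mskz2", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // client-go throttling adds its own spacing
        }
        fmt.Println("timed out waiting for pod")
    }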
	
	* 
	* ==> CRI-O <==
	* Nov 28 04:37:37 multinode-448128 crio[900]: time="2023-11-28 04:37:37.623814495Z" level=info msg="Starting container: 798ded7b1c0598e2d844585d957cd67a6f5e3192936ade687cf7c6227b55ad3b" id=3e41cd94-f578-4fd2-8cf6-7314432d2416 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:37:37 multinode-448128 crio[900]: time="2023-11-28 04:37:37.638637393Z" level=info msg="Started container" PID=1934 containerID=798ded7b1c0598e2d844585d957cd67a6f5e3192936ade687cf7c6227b55ad3b description=kube-system/storage-provisioner/storage-provisioner id=3e41cd94-f578-4fd2-8cf6-7314432d2416 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45c99b4f47c909024f4bdd5b7d2b480400089384d6a8ecaeeeed5239d4d90afe
	Nov 28 04:37:37 multinode-448128 crio[900]: time="2023-11-28 04:37:37.675932736Z" level=info msg="Created container dfafec7ebd237ecb8bd089f4e4b27107cdec214dd6141c8c83a3d43898e1e84f: kube-system/coredns-5dd5756b68-h99h4/coredns" id=347a6f07-5841-4b38-bcc7-52b39179a561 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:37:37 multinode-448128 crio[900]: time="2023-11-28 04:37:37.677155261Z" level=info msg="Starting container: dfafec7ebd237ecb8bd089f4e4b27107cdec214dd6141c8c83a3d43898e1e84f" id=ed30d6a5-64be-4344-a041-557e215912ff name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:37:37 multinode-448128 crio[900]: time="2023-11-28 04:37:37.692926274Z" level=info msg="Started container" PID=1961 containerID=dfafec7ebd237ecb8bd089f4e4b27107cdec214dd6141c8c83a3d43898e1e84f description=kube-system/coredns-5dd5756b68-h99h4/coredns id=ed30d6a5-64be-4344-a041-557e215912ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=9472aeceeaedf7988c11c750c3a740e21adca55521c591c4cd7526cedce23066
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.724006060Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-cpvdq/POD" id=72ef1c62-a289-4390-992b-05bf4fba20fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.724072423Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.741272541Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-cpvdq Namespace:default ID:50446f3cd6a2836ab6bc73aaf13a2e323b282b3d9798129e687b5abed361a0f8 UID:fee4ed58-80e0-4240-9df4-12f5f0fdde8e NetNS:/var/run/netns/ba0742f4-0776-4a5e-8c6b-b5e17f6bab2e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.741464433Z" level=info msg="Adding pod default_busybox-5bc68d56bd-cpvdq to CNI network \"kindnet\" (type=ptp)"
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.764999124Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-cpvdq Namespace:default ID:50446f3cd6a2836ab6bc73aaf13a2e323b282b3d9798129e687b5abed361a0f8 UID:fee4ed58-80e0-4240-9df4-12f5f0fdde8e NetNS:/var/run/netns/ba0742f4-0776-4a5e-8c6b-b5e17f6bab2e Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.765151262Z" level=info msg="Checking pod default_busybox-5bc68d56bd-cpvdq for CNI network kindnet (type=ptp)"
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.769708253Z" level=info msg="Ran pod sandbox 50446f3cd6a2836ab6bc73aaf13a2e323b282b3d9798129e687b5abed361a0f8 with infra container: default/busybox-5bc68d56bd-cpvdq/POD" id=72ef1c62-a289-4390-992b-05bf4fba20fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.770735989Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=fbf55811-239c-46e9-8d7e-dfc25da9e62b name=/runtime.v1.ImageService/ImageStatus
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.770988098Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=fbf55811-239c-46e9-8d7e-dfc25da9e62b name=/runtime.v1.ImageService/ImageStatus
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.772154993Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=08c4632c-2223-4e02-83e5-7c89c1b202cc name=/runtime.v1.ImageService/PullImage
	Nov 28 04:38:28 multinode-448128 crio[900]: time="2023-11-28 04:38:28.773601148Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 28 04:38:29 multinode-448128 crio[900]: time="2023-11-28 04:38:29.442945902Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 28 04:38:30 multinode-448128 crio[900]: time="2023-11-28 04:38:30.713536008Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=08c4632c-2223-4e02-83e5-7c89c1b202cc name=/runtime.v1.ImageService/PullImage
	Nov 28 04:38:30 multinode-448128 crio[900]: time="2023-11-28 04:38:30.715576982Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=cb967619-dada-4a2e-82a2-949f3d7a21bb name=/runtime.v1.ImageService/ImageStatus
	Nov 28 04:38:30 multinode-448128 crio[900]: time="2023-11-28 04:38:30.716246107Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cb967619-dada-4a2e-82a2-949f3d7a21bb name=/runtime.v1.ImageService/ImageStatus
	Nov 28 04:38:30 multinode-448128 crio[900]: time="2023-11-28 04:38:30.717324067Z" level=info msg="Creating container: default/busybox-5bc68d56bd-cpvdq/busybox" id=b150c708-c5f5-4224-abad-61c3e7ae184d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:38:30 multinode-448128 crio[900]: time="2023-11-28 04:38:30.717414479Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 28 04:38:30 multinode-448128 crio[900]: time="2023-11-28 04:38:30.817140158Z" level=info msg="Created container 302766af8503b74375e1839229e8b1b990af22cddc5d960f7c8b78dd525c8c33: default/busybox-5bc68d56bd-cpvdq/busybox" id=b150c708-c5f5-4224-abad-61c3e7ae184d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:38:30 multinode-448128 crio[900]: time="2023-11-28 04:38:30.818000486Z" level=info msg="Starting container: 302766af8503b74375e1839229e8b1b990af22cddc5d960f7c8b78dd525c8c33" id=fb5d3b59-dc37-4e95-95fb-f94bb7e4dca2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:38:30 multinode-448128 crio[900]: time="2023-11-28 04:38:30.829589243Z" level=info msg="Started container" PID=2091 containerID=302766af8503b74375e1839229e8b1b990af22cddc5d960f7c8b78dd525c8c33 description=default/busybox-5bc68d56bd-cpvdq/busybox id=fb5d3b59-dc37-4e95-95fb-f94bb7e4dca2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=50446f3cd6a2836ab6bc73aaf13a2e323b282b3d9798129e687b5abed361a0f8
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	302766af8503b       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   50446f3cd6a28       busybox-5bc68d56bd-cpvdq
	dfafec7ebd237       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      58 seconds ago       Running             coredns                   0                   9472aeceeaedf       coredns-5dd5756b68-h99h4
	798ded7b1c059       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      58 seconds ago       Running             storage-provisioner       0                   45c99b4f47c90       storage-provisioner
	01a357e9ea143       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   01912eafa3511       kindnet-9lv68
	b96278abb67fc       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39                                      About a minute ago   Running             kube-proxy                0                   8100d7138faa7       kube-proxy-mskz2
	dc2c5388a3ece       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419                                      About a minute ago   Running             kube-apiserver            0                   b293e2c102a9e       kube-apiserver-multinode-448128
	cd43fff8e05ae       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b                                      About a minute ago   Running             kube-controller-manager   0                   889cb6942c42e       kube-controller-manager-multinode-448128
	4b7522d0b2d6c       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54                                      About a minute ago   Running             kube-scheduler            0                   ced34ced91a24       kube-scheduler-multinode-448128
	6d2d1e7d3a5e0       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   59647c01b45aa       etcd-multinode-448128
	
	* 
	* ==> coredns [dfafec7ebd237ecb8bd089f4e4b27107cdec214dd6141c8c83a3d43898e1e84f] <==
	* [INFO] 10.244.1.2:40859 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133037s
	[INFO] 10.244.0.3:42126 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130485s
	[INFO] 10.244.0.3:38764 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236334s
	[INFO] 10.244.0.3:37775 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104344s
	[INFO] 10.244.0.3:45100 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062408s
	[INFO] 10.244.0.3:37493 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005593514s
	[INFO] 10.244.0.3:36307 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072386s
	[INFO] 10.244.0.3:33879 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089517s
	[INFO] 10.244.0.3:35741 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067864s
	[INFO] 10.244.1.2:49507 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120861s
	[INFO] 10.244.1.2:36485 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098117s
	[INFO] 10.244.1.2:34569 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076496s
	[INFO] 10.244.1.2:54757 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075618s
	[INFO] 10.244.0.3:52564 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159121s
	[INFO] 10.244.0.3:48180 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077284s
	[INFO] 10.244.0.3:51043 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076955s
	[INFO] 10.244.0.3:57391 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073272s
	[INFO] 10.244.1.2:54408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114568s
	[INFO] 10.244.1.2:36273 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129599s
	[INFO] 10.244.1.2:53587 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000114494s
	[INFO] 10.244.1.2:45363 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125891s
	[INFO] 10.244.0.3:46567 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101012s
	[INFO] 10.244.0.3:46757 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000050026s
	[INFO] 10.244.0.3:51612 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000049673s
	[INFO] 10.244.0.3:58627 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000046384s
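
The coredns queries above resolve cluster-internal names such as kubernetes.default.svc.cluster.local (the 10.96.0.1 service IP allocated in the kube-apiserver log further below). From inside any pod the same lookup goes through the standard resolver; a minimal sketch, with the service name taken from the log:

    package main

    import (
        "context"
        "fmt"
        "net"
    )

    func main() {
        addrs, err := net.DefaultResolver.LookupHost(context.TODO(),
            "kubernetes.default.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs) // expect [10.96.0.1] in this cluster
    }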
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-448128
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-448128
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=multinode-448128
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_36_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:36:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-448128
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:38:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:37:37 +0000   Tue, 28 Nov 2023 04:36:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:37:37 +0000   Tue, 28 Nov 2023 04:36:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:37:37 +0000   Tue, 28 Nov 2023 04:36:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:37:37 +0000   Tue, 28 Nov 2023 04:37:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-448128
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 cddd0c8c00e6425cb4b514f034333ca0
	  System UUID:                905713c6-eb48-43f4-8d54-a8ddd886ad31
	  Boot ID:                    29ce650a-e22a-4e0d-bffe-126490eafcf6
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-cpvdq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-h99h4                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     91s
	  kube-system                 etcd-multinode-448128                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         104s
	  kube-system                 kindnet-9lv68                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-multinode-448128             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-multinode-448128    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-mskz2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-multinode-448128             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 89s   kube-proxy       
	  Normal  Starting                 104s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s  kubelet          Node multinode-448128 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s  kubelet          Node multinode-448128 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s  kubelet          Node multinode-448128 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s   node-controller  Node multinode-448128 event: Registered Node multinode-448128 in Controller
	  Normal  NodeReady                59s   kubelet          Node multinode-448128 status is now: NodeReady
	
	
	Name:               multinode-448128-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-448128-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:37:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-448128-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:38:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:38:25 +0000   Tue, 28 Nov 2023 04:37:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:38:25 +0000   Tue, 28 Nov 2023 04:37:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:38:25 +0000   Tue, 28 Nov 2023 04:37:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:38:25 +0000   Tue, 28 Nov 2023 04:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-448128-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2cb6b1f48db40fda9208941862b9f21
	  System UUID:                5ce9704b-54aa-46f5-a0d9-1364fdde25d4
	  Boot ID:                    29ce650a-e22a-4e0d-bffe-126490eafcf6
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-9h4s8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-5fn4w               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-proxy-w85sn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)  kubelet          Node multinode-448128-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)  kubelet          Node multinode-448128-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)  kubelet          Node multinode-448128-m02 status is now: NodeHasSufficientPID
	  Normal  CIDRAssignmentFailed     42s                cidrAllocator    Node multinode-448128-m02 status is now: CIDRAssignmentFailed
	  Normal  RegisteredNode           41s                node-controller  Node multinode-448128-m02 event: Registered Node multinode-448128-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-448128-m02 status is now: NodeReady
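
Both node descriptions report MemoryPressure, DiskPressure, and PIDPressure as False, which is exactly what the NodePressure verification earlier in this run reads off the API. A client-go sketch of that check, assuming the default kubeconfig; this mirrors the idea of node_conditions.go, not its actual code.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    // Expect Status=False on both nodes, as in the tables above.
                    fmt.Printf("%s %s=%s\n", n.Name, c.Type, c.Status)
                }
            }
        }
    }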
	
	* 
	* ==> dmesg <==
	* [  +0.001139] FS-Cache: O-key=[8] '4f415c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008c3c2dae
	[  +0.001123] FS-Cache: N-key=[8] '4f415c0100000000'
	[  +0.003665] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=0000000000e844bc
	[  +0.001111] FS-Cache: O-key=[8] '4f415c0100000000'
	[  +0.000746] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=00000000dbb2bfbf
	[  +0.001088] FS-Cache: N-key=[8] '4f415c0100000000'
	[  +2.166969] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001027] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=000000001868d326
	[  +0.001142] FS-Cache: O-key=[8] '4e415c0100000000'
	[  +0.000817] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008c3c2dae
	[  +0.001108] FS-Cache: N-key=[8] '4e415c0100000000'
	[  +0.392945] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001131] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=000000007d358928
	[  +0.001121] FS-Cache: O-key=[8] '54415c0100000000'
	[  +0.000768] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000007d93f4ca
	[  +0.001181] FS-Cache: N-key=[8] '54415c0100000000'
	
	* 
	* ==> etcd [6d2d1e7d3a5e0be216582b9919871c178bf942d6549a157b4415acca4b496f38] <==
	* {"level":"info","ts":"2023-11-28T04:36:45.325275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-11-28T04:36:45.325616Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-11-28T04:36:45.334824Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T04:36:45.335168Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-28T04:36:45.360608Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-28T04:36:45.360647Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T04:36:45.360706Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T04:36:45.878797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T04:36:45.87893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T04:36:45.878979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-28T04:36:45.879017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T04:36:45.879049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-28T04:36:45.879086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-28T04:36:45.879123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-28T04:36:45.880778Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:36:45.884865Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-448128 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T04:36:45.888716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:36:45.888885Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:36:45.888936Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:36:45.888746Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:36:45.888765Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:36:45.890132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T04:36:45.890903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-28T04:36:45.88879Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T04:36:45.892736Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  04:38:36 up  7:20,  0 users,  load average: 1.61, 2.03, 2.00
	Linux multinode-448128 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [01a357e9ea143aea1ab87d153a3832362b435a3b25a686984fbd8bc67ce71c42] <==
	* I1128 04:37:06.821476       1 main.go:116] setting mtu 1500 for CNI 
	I1128 04:37:06.821537       1 main.go:146] kindnetd IP family: "ipv4"
	I1128 04:37:06.821635       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1128 04:37:37.046599       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1128 04:37:37.063112       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1128 04:37:37.063325       1 main.go:227] handling current node
	I1128 04:37:47.080624       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1128 04:37:47.080850       1 main.go:227] handling current node
	I1128 04:37:57.093486       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1128 04:37:57.093516       1 main.go:227] handling current node
	I1128 04:37:57.093527       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1128 04:37:57.093533       1 main.go:250] Node multinode-448128-m02 has CIDR [10.244.1.0/24] 
	I1128 04:37:57.093679       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1128 04:38:07.106531       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1128 04:38:07.106560       1 main.go:227] handling current node
	I1128 04:38:07.106571       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1128 04:38:07.106577       1 main.go:250] Node multinode-448128-m02 has CIDR [10.244.1.0/24] 
	I1128 04:38:17.111496       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1128 04:38:17.111526       1 main.go:227] handling current node
	I1128 04:38:17.111537       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1128 04:38:17.111542       1 main.go:250] Node multinode-448128-m02 has CIDR [10.244.1.0/24] 
	I1128 04:38:27.125245       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1128 04:38:27.125280       1 main.go:227] handling current node
	I1128 04:38:27.125292       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1128 04:38:27.125298       1 main.go:250] Node multinode-448128-m02 has CIDR [10.244.1.0/24] 
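
kindnet's "Adding route" line above prints a netlink route: traffic for the second node's pod CIDR (10.244.1.0/24) is sent via that node's IP (192.168.58.3). A sketch of installing the same route with github.com/vishvananda/netlink, the library whose Route struct format appears in the log; values are taken from the log, and this must run as root.

    package main

    import (
        "net"

        "github.com/vishvananda/netlink"
    )

    func main() {
        _, dst, err := net.ParseCIDR("10.244.1.0/24")
        if err != nil {
            panic(err)
        }
        route := &netlink.Route{
            Dst: dst,
            Gw:  net.ParseIP("192.168.58.3"),
        }
        // Equivalent to: ip route replace 10.244.1.0/24 via 192.168.58.3
        if err := netlink.RouteReplace(route); err != nil {
            panic(err)
        }
    }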
	
	* 
	* ==> kube-apiserver [dc2c5388a3ecefb104841eaf473ad80babce019ac28952485ffeb35ba8cb38a2] <==
	* I1128 04:36:49.348299       1 controller.go:624] quota admission added evaluator for: namespaces
	E1128 04:36:49.356798       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1128 04:36:49.358674       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1128 04:36:49.358737       1 aggregator.go:166] initial CRD sync complete...
	I1128 04:36:49.358793       1 autoregister_controller.go:141] Starting autoregister controller
	I1128 04:36:49.358799       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1128 04:36:49.358806       1 cache.go:39] Caches are synced for autoregister controller
	I1128 04:36:49.364955       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1128 04:36:49.559997       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 04:36:50.049651       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1128 04:36:50.056704       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1128 04:36:50.056729       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1128 04:36:50.595480       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 04:36:50.640080       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1128 04:36:50.758491       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1128 04:36:50.768563       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1128 04:36:50.769891       1 controller.go:624] quota admission added evaluator for: endpoints
	I1128 04:36:50.774679       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1128 04:36:51.243430       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1128 04:36:52.298821       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1128 04:36:52.313796       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1128 04:36:52.337321       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1128 04:37:05.745116       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1128 04:37:05.795101       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1128 04:38:32.377555       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:44900->192.168.58.3:10250: write: broken pipe
	
	* 
	* ==> kube-controller-manager [cd43fff8e05ae3b7a3e67b20f0d4bf80d8f4bc27ba6bd805004a85e74ed497fc] <==
	* I1128 04:37:40.094130       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1128 04:37:54.017919       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-448128-m02\" does not exist"
	I1128 04:37:54.044719       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w85sn"
	I1128 04:37:54.049449       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5fn4w"
	I1128 04:37:54.074136       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-448128-m02" podCIDRs=["10.244.1.0/24"]
	E1128 04:37:54.111851       1 range_allocator.go:385] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-448128-m02\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" node="multinode-448128-m02" podCIDRs=["10.244.2.0/24"]
	E1128 04:37:54.112095       1 range_allocator.go:391] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-448128-m02\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" node="multinode-448128-m02"
	E1128 04:37:54.112177       1 range_allocator.go:368] "Node already has a CIDR allocated. Releasing the new one" node="multinode-448128-m02" podCIDRs=["10.244.1.0/24"]
	I1128 04:37:54.112541       1 event.go:307] "Event occurred" object="multinode-448128-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="CIDRAssignmentFailed" message="Node multinode-448128-m02 status is now: CIDRAssignmentFailed"
	I1128 04:37:55.098958       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-448128-m02"
	I1128 04:37:55.099084       1 event.go:307] "Event occurred" object="multinode-448128-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-448128-m02 event: Registered Node multinode-448128-m02 in Controller"
	I1128 04:38:25.543428       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-448128-m02"
	I1128 04:38:28.359825       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1128 04:38:28.386792       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-9h4s8"
	I1128 04:38:28.403871       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-cpvdq"
	I1128 04:38:28.416650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.549505ms"
	I1128 04:38:28.435818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.105347ms"
	I1128 04:38:28.435891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.061µs"
	I1128 04:38:28.441917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.396µs"
	I1128 04:38:28.450852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.908µs"
	I1128 04:38:30.125811       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-9h4s8" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-9h4s8"
	I1128 04:38:31.648343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.450907ms"
	I1128 04:38:31.648525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.709µs"
	I1128 04:38:31.716898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.563434ms"
	I1128 04:38:31.716973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="36.808µs"
	
	* 
	* ==> kube-proxy [b96278abb67fcdffcdc96ba47a0086c7823695556d9573ad26fe8d1811378d3e] <==
	* I1128 04:37:06.983241       1 server_others.go:69] "Using iptables proxy"
	I1128 04:37:07.000915       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1128 04:37:07.038670       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1128 04:37:07.040923       1 server_others.go:152] "Using iptables Proxier"
	I1128 04:37:07.041021       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1128 04:37:07.041052       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1128 04:37:07.041110       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 04:37:07.041359       1 server.go:846] "Version info" version="v1.28.4"
	I1128 04:37:07.041600       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:37:07.042434       1 config.go:188] "Starting service config controller"
	I1128 04:37:07.042551       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 04:37:07.042611       1 config.go:97] "Starting endpoint slice config controller"
	I1128 04:37:07.042652       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 04:37:07.043196       1 config.go:315] "Starting node config controller"
	I1128 04:37:07.045751       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 04:37:07.146412       1 shared_informer.go:318] Caches are synced for service config
	I1128 04:37:07.146562       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 04:37:07.147835       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4b7522d0b2d6c8137e81de3e737a0235acca3777af778892ec92c78c641ccd93] <==
	* W1128 04:36:49.318361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 04:36:49.318391       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1128 04:36:49.318464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:36:49.318482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 04:36:49.318510       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:36:49.318552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1128 04:36:49.318587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:36:49.318639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1128 04:36:49.318661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 04:36:49.318693       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:36:49.318712       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:36:49.318702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 04:36:49.318763       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:36:49.318780       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1128 04:36:49.318623       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 04:36:49.318819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:36:49.318834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:36:49.318853       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 04:36:50.138242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:36:50.138281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1128 04:36:50.174342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:36:50.174389       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 04:36:50.282019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:36:50.282052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1128 04:36:50.904230       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 28 04:37:05 multinode-448128 kubelet[1387]: I1128 04:37:05.922724    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b64f475-0c18-4eeb-9cf5-99cfc90e09c6-lib-modules\") pod \"kindnet-9lv68\" (UID: \"8b64f475-0c18-4eeb-9cf5-99cfc90e09c6\") " pod="kube-system/kindnet-9lv68"
	Nov 28 04:37:05 multinode-448128 kubelet[1387]: I1128 04:37:05.922747    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465-kube-proxy\") pod \"kube-proxy-mskz2\" (UID: \"36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465\") " pod="kube-system/kube-proxy-mskz2"
	Nov 28 04:37:05 multinode-448128 kubelet[1387]: I1128 04:37:05.922778    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjh4r\" (UniqueName: \"kubernetes.io/projected/8b64f475-0c18-4eeb-9cf5-99cfc90e09c6-kube-api-access-jjh4r\") pod \"kindnet-9lv68\" (UID: \"8b64f475-0c18-4eeb-9cf5-99cfc90e09c6\") " pod="kube-system/kindnet-9lv68"
	Nov 28 04:37:05 multinode-448128 kubelet[1387]: I1128 04:37:05.922808    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4g28\" (UniqueName: \"kubernetes.io/projected/36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465-kube-api-access-k4g28\") pod \"kube-proxy-mskz2\" (UID: \"36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465\") " pod="kube-system/kube-proxy-mskz2"
	Nov 28 04:37:05 multinode-448128 kubelet[1387]: I1128 04:37:05.922834    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b64f475-0c18-4eeb-9cf5-99cfc90e09c6-cni-cfg\") pod \"kindnet-9lv68\" (UID: \"8b64f475-0c18-4eeb-9cf5-99cfc90e09c6\") " pod="kube-system/kindnet-9lv68"
	Nov 28 04:37:05 multinode-448128 kubelet[1387]: I1128 04:37:05.922863    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465-xtables-lock\") pod \"kube-proxy-mskz2\" (UID: \"36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465\") " pod="kube-system/kube-proxy-mskz2"
	Nov 28 04:37:05 multinode-448128 kubelet[1387]: I1128 04:37:05.922886    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465-lib-modules\") pod \"kube-proxy-mskz2\" (UID: \"36c9eac9-1c3a-4b4e-b10b-2dcb68cfb465\") " pod="kube-system/kube-proxy-mskz2"
	Nov 28 04:37:06 multinode-448128 kubelet[1387]: W1128 04:37:06.227372    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/crio-8100d7138faa7e56fc1a2f96ad23b29afb35bd84518b1310ad6ec61591332f2a WatchSource:0}: Error finding container 8100d7138faa7e56fc1a2f96ad23b29afb35bd84518b1310ad6ec61591332f2a: Status 404 returned error can't find the container with id 8100d7138faa7e56fc1a2f96ad23b29afb35bd84518b1310ad6ec61591332f2a
	Nov 28 04:37:07 multinode-448128 kubelet[1387]: I1128 04:37:07.571560    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mskz2" podStartSLOduration=2.571513916 podCreationTimestamp="2023-11-28 04:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 04:37:07.557405409 +0000 UTC m=+15.281755632" watchObservedRunningTime="2023-11-28 04:37:07.571513916 +0000 UTC m=+15.295864147"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: I1128 04:37:37.156016    1387 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: I1128 04:37:37.186567    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9lv68" podStartSLOduration=32.186499871 podCreationTimestamp="2023-11-28 04:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 04:37:07.571882562 +0000 UTC m=+15.296232793" watchObservedRunningTime="2023-11-28 04:37:37.186499871 +0000 UTC m=+44.910850102"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: I1128 04:37:37.186977    1387 topology_manager.go:215] "Topology Admit Handler" podUID="770a2e4e-e096-47e0-81a9-0623bbaa4825" podNamespace="kube-system" podName="coredns-5dd5756b68-h99h4"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: I1128 04:37:37.188625    1387 topology_manager.go:215] "Topology Admit Handler" podUID="d0447b95-909c-4274-82c0-d916436e0f3e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: I1128 04:37:37.245632    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/770a2e4e-e096-47e0-81a9-0623bbaa4825-config-volume\") pod \"coredns-5dd5756b68-h99h4\" (UID: \"770a2e4e-e096-47e0-81a9-0623bbaa4825\") " pod="kube-system/coredns-5dd5756b68-h99h4"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: I1128 04:37:37.245692    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d0447b95-909c-4274-82c0-d916436e0f3e-tmp\") pod \"storage-provisioner\" (UID: \"d0447b95-909c-4274-82c0-d916436e0f3e\") " pod="kube-system/storage-provisioner"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: I1128 04:37:37.245722    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcmx\" (UniqueName: \"kubernetes.io/projected/d0447b95-909c-4274-82c0-d916436e0f3e-kube-api-access-fgcmx\") pod \"storage-provisioner\" (UID: \"d0447b95-909c-4274-82c0-d916436e0f3e\") " pod="kube-system/storage-provisioner"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: I1128 04:37:37.245747    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d7zj\" (UniqueName: \"kubernetes.io/projected/770a2e4e-e096-47e0-81a9-0623bbaa4825-kube-api-access-5d7zj\") pod \"coredns-5dd5756b68-h99h4\" (UID: \"770a2e4e-e096-47e0-81a9-0623bbaa4825\") " pod="kube-system/coredns-5dd5756b68-h99h4"
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: W1128 04:37:37.534968    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/crio-45c99b4f47c909024f4bdd5b7d2b480400089384d6a8ecaeeeed5239d4d90afe WatchSource:0}: Error finding container 45c99b4f47c909024f4bdd5b7d2b480400089384d6a8ecaeeeed5239d4d90afe: Status 404 returned error can't find the container with id 45c99b4f47c909024f4bdd5b7d2b480400089384d6a8ecaeeeed5239d4d90afe
	Nov 28 04:37:37 multinode-448128 kubelet[1387]: W1128 04:37:37.537892    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/crio-9472aeceeaedf7988c11c750c3a740e21adca55521c591c4cd7526cedce23066 WatchSource:0}: Error finding container 9472aeceeaedf7988c11c750c3a740e21adca55521c591c4cd7526cedce23066: Status 404 returned error can't find the container with id 9472aeceeaedf7988c11c750c3a740e21adca55521c591c4cd7526cedce23066
	Nov 28 04:37:38 multinode-448128 kubelet[1387]: I1128 04:37:38.626005    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-h99h4" podStartSLOduration=33.625960959 podCreationTimestamp="2023-11-28 04:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 04:37:38.610046554 +0000 UTC m=+46.334396776" watchObservedRunningTime="2023-11-28 04:37:38.625960959 +0000 UTC m=+46.350311182"
	Nov 28 04:38:28 multinode-448128 kubelet[1387]: I1128 04:38:28.420812    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=81.420768571 podCreationTimestamp="2023-11-28 04:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 04:37:38.645330386 +0000 UTC m=+46.369680609" watchObservedRunningTime="2023-11-28 04:38:28.420768571 +0000 UTC m=+96.145118794"
	Nov 28 04:38:28 multinode-448128 kubelet[1387]: I1128 04:38:28.422032    1387 topology_manager.go:215] "Topology Admit Handler" podUID="fee4ed58-80e0-4240-9df4-12f5f0fdde8e" podNamespace="default" podName="busybox-5bc68d56bd-cpvdq"
	Nov 28 04:38:28 multinode-448128 kubelet[1387]: I1128 04:38:28.535885    1387 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx4jx\" (UniqueName: \"kubernetes.io/projected/fee4ed58-80e0-4240-9df4-12f5f0fdde8e-kube-api-access-lx4jx\") pod \"busybox-5bc68d56bd-cpvdq\" (UID: \"fee4ed58-80e0-4240-9df4-12f5f0fdde8e\") " pod="default/busybox-5bc68d56bd-cpvdq"
	Nov 28 04:38:28 multinode-448128 kubelet[1387]: W1128 04:38:28.767889    1387 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/crio-50446f3cd6a2836ab6bc73aaf13a2e323b282b3d9798129e687b5abed361a0f8 WatchSource:0}: Error finding container 50446f3cd6a2836ab6bc73aaf13a2e323b282b3d9798129e687b5abed361a0f8: Status 404 returned error can't find the container with id 50446f3cd6a2836ab6bc73aaf13a2e323b282b3d9798129e687b5abed361a0f8
	Nov 28 04:38:31 multinode-448128 kubelet[1387]: I1128 04:38:31.710610    1387 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-cpvdq" podStartSLOduration=1.767184137 podCreationTimestamp="2023-11-28 04:38:28 +0000 UTC" firstStartedPulling="2023-11-28 04:38:28.7711597 +0000 UTC m=+96.495509922" lastFinishedPulling="2023-11-28 04:38:30.714541246 +0000 UTC m=+98.438891469" observedRunningTime="2023-11-28 04:38:31.710375646 +0000 UTC m=+99.434725869" watchObservedRunningTime="2023-11-28 04:38:31.710565684 +0000 UTC m=+99.434915932"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-448128 -n multinode-448128
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-448128 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.36s)
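Note: the failing check pings the host from each of the two busybox pods created in the controller-manager events above. A minimal reproduction sketch in Go (assumptions: the pod names are taken from this run's events, and the pod's default gateway is used as a stand-in for the host address; the real test derives the host IP differently):

	package main

	// Reproduction sketch, not the minikube test itself: shell out to kubectl
	// and run a single ping from inside each busybox pod. The gateway lookup
	// via `ip route` is an assumption; substitute the address the test targets.

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-5bc68d56bd-9h4s8", "busybox-5bc68d56bd-cpvdq"}
		for _, pod := range pods {
			cmd := exec.Command("kubectl", "--context", "multinode-448128",
				"exec", pod, "--", "sh", "-c",
				"ping -c 1 -W 2 $(ip route | awk '/^default/ {print $3}')")
			out, err := cmd.CombinedOutput()
			fmt.Printf("%s: err=%v\n%s", pod, err, out)
		}
	}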

TestRunningBinaryUpgrade (70.12s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1021681093.exe start -p running-upgrade-571883 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1128 04:53:53.035617 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:54:15.723810 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1021681093.exe start -p running-upgrade-571883 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m0.17869018s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-571883 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-571883 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (4.747527928s)

-- stdout --
	* [running-upgrade-571883] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-571883 in cluster running-upgrade-571883
	* Pulling base image ...
	* Updating the running docker "running-upgrade-571883" container ...
	
	

-- /stdout --
** stderr ** 
	I1128 04:54:28.403360 1383206 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:54:28.403713 1383206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:54:28.403743 1383206 out.go:309] Setting ErrFile to fd 2...
	I1128 04:54:28.403763 1383206 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:54:28.404072 1383206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:54:28.404530 1383206 out.go:303] Setting JSON to false
	I1128 04:54:28.405866 1383206 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27403,"bootTime":1701119865,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:54:28.406040 1383206 start.go:138] virtualization:  
	I1128 04:54:28.409513 1383206 out.go:177] * [running-upgrade-571883] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:54:28.411522 1383206 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1128 04:54:28.418441 1383206 notify.go:220] Checking for updates...
	I1128 04:54:28.423685 1383206 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:54:28.426266 1383206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:54:28.428011 1383206 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:54:28.429825 1383206 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:54:28.431496 1383206 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:54:28.433238 1383206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:54:28.435766 1383206 config.go:182] Loaded profile config "running-upgrade-571883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 04:54:28.438140 1383206 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 04:54:28.439815 1383206 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:54:28.489427 1383206 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:54:28.489537 1383206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:54:28.706044 1383206 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-28 04:54:28.689746634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:54:28.706173 1383206 docker.go:295] overlay module found
	I1128 04:54:28.710429 1383206 out.go:177] * Using the docker driver based on existing profile
	I1128 04:54:28.712413 1383206 start.go:298] selected driver: docker
	I1128 04:54:28.712436 1383206 start.go:902] validating driver "docker" against &{Name:running-upgrade-571883 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-571883 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.236 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 04:54:28.712740 1383206 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:54:28.713694 1383206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:54:28.780790 1383206 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1128 04:54:28.897996 1383206 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-28 04:54:28.882550506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:54:28.898353 1383206 cni.go:84] Creating CNI manager for ""
	I1128 04:54:28.898373 1383206 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:54:28.898410 1383206 start_flags.go:323] config:
	{Name:running-upgrade-571883 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-571883 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.236 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 04:54:28.900738 1383206 out.go:177] * Starting control plane node running-upgrade-571883 in cluster running-upgrade-571883
	I1128 04:54:28.902511 1383206 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:54:28.904282 1383206 out.go:177] * Pulling base image ...
	I1128 04:54:28.906024 1383206 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1128 04:54:28.906211 1383206 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1128 04:54:28.932283 1383206 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1128 04:54:28.932307 1383206 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1128 04:54:28.970964 1383206 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1128 04:54:28.971170 1383206 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/running-upgrade-571883/config.json ...
	I1128 04:54:28.971418 1383206 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:54:28.971484 1383206 start.go:365] acquiring machines lock for running-upgrade-571883: {Name:mkfdc7ae19f34ddbb04237bfd12009d00025ef1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.971537 1383206 start.go:369] acquired machines lock for "running-upgrade-571883" in 34.404µs
	I1128 04:54:28.971552 1383206 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:54:28.971557 1383206 fix.go:54] fixHost starting: 
	I1128 04:54:28.971826 1383206 cli_runner.go:164] Run: docker container inspect running-upgrade-571883 --format={{.State.Status}}
	I1128 04:54:28.972110 1383206 cache.go:107] acquiring lock: {Name:mka9a2e991eba10434a66f00ab2058fa051639a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.972179 1383206 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1128 04:54:28.972188 1383206 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.055µs
	I1128 04:54:28.972198 1383206 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1128 04:54:28.972208 1383206 cache.go:107] acquiring lock: {Name:mk26c91ba81ebf0485d3d2b4159504e48036386d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.972240 1383206 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1128 04:54:28.972245 1383206 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 38.006µs
	I1128 04:54:28.972252 1383206 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1128 04:54:28.972262 1383206 cache.go:107] acquiring lock: {Name:mkbc5e1711f09ecc0008e09508e418955e4198e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.972289 1383206 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1128 04:54:28.972294 1383206 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 33.485µs
	I1128 04:54:28.972301 1383206 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1128 04:54:28.972309 1383206 cache.go:107] acquiring lock: {Name:mk94967ebcdd58bf9df6dfb0725979bab0037761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.972336 1383206 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1128 04:54:28.972341 1383206 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 32.534µs
	I1128 04:54:28.972347 1383206 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1128 04:54:28.972356 1383206 cache.go:107] acquiring lock: {Name:mk5c96306b5f465374d8e9f4ce93d0675ac42790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.972386 1383206 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1128 04:54:28.972393 1383206 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 35.233µs
	I1128 04:54:28.972399 1383206 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1128 04:54:28.972407 1383206 cache.go:107] acquiring lock: {Name:mk673aa97dd638eab8157fa37803fc00f97475e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.972433 1383206 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1128 04:54:28.972437 1383206 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 31.032µs
	I1128 04:54:28.972444 1383206 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1128 04:54:28.972453 1383206 cache.go:107] acquiring lock: {Name:mka6833120e0559f634e3494d0d83941d371ad59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.972481 1383206 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1128 04:54:28.972486 1383206 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 33.641µs
	I1128 04:54:28.972492 1383206 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1128 04:54:28.972506 1383206 cache.go:107] acquiring lock: {Name:mk9966ff9e63ed5631198d157e2b6d006484f44a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:54:28.972551 1383206 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1128 04:54:28.972556 1383206 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 55.458µs
	I1128 04:54:28.972563 1383206 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1128 04:54:28.972569 1383206 cache.go:87] Successfully saved all images to host disk.
	I1128 04:54:28.998508 1383206 fix.go:102] recreateIfNeeded on running-upgrade-571883: state=Running err=<nil>
	W1128 04:54:28.998534 1383206 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:54:29.002307 1383206 out.go:177] * Updating the running docker "running-upgrade-571883" container ...
	I1128 04:54:29.004638 1383206 machine.go:88] provisioning docker machine ...
	I1128 04:54:29.004779 1383206 ubuntu.go:169] provisioning hostname "running-upgrade-571883"
	I1128 04:54:29.004911 1383206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-571883
	I1128 04:54:29.026959 1383206 main.go:141] libmachine: Using SSH client type: native
	I1128 04:54:29.027390 1383206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34500 <nil> <nil>}
	I1128 04:54:29.027403 1383206 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-571883 && echo "running-upgrade-571883" | sudo tee /etc/hostname
	I1128 04:54:29.196563 1383206 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-571883
	
	I1128 04:54:29.196756 1383206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-571883
	I1128 04:54:29.220051 1383206 main.go:141] libmachine: Using SSH client type: native
	I1128 04:54:29.220453 1383206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34500 <nil> <nil>}
	I1128 04:54:29.220470 1383206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-571883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-571883/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-571883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:54:29.362215 1383206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:54:29.362245 1383206 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:54:29.362274 1383206 ubuntu.go:177] setting up certificates
	I1128 04:54:29.362284 1383206 provision.go:83] configureAuth start
	I1128 04:54:29.362346 1383206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-571883
	I1128 04:54:29.383639 1383206 provision.go:138] copyHostCerts
	I1128 04:54:29.383703 1383206 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:54:29.383729 1383206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:54:29.383817 1383206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:54:29.383971 1383206 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:54:29.383982 1383206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:54:29.384013 1383206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:54:29.384078 1383206 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:54:29.384088 1383206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:54:29.384113 1383206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:54:29.384160 1383206 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-571883 san=[192.168.70.236 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-571883]
	I1128 04:54:30.384274 1383206 provision.go:172] copyRemoteCerts
	I1128 04:54:30.384343 1383206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:54:30.384396 1383206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-571883
	I1128 04:54:30.402777 1383206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34500 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/running-upgrade-571883/id_rsa Username:docker}
	I1128 04:54:30.506068 1383206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:54:30.530477 1383206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 04:54:30.557800 1383206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:54:30.583932 1383206 provision.go:86] duration metric: configureAuth took 1.221634069s
	I1128 04:54:30.584004 1383206 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:54:30.584190 1383206 config.go:182] Loaded profile config "running-upgrade-571883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 04:54:30.584299 1383206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-571883
	I1128 04:54:30.607420 1383206 main.go:141] libmachine: Using SSH client type: native
	I1128 04:54:30.607839 1383206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34500 <nil> <nil>}
	I1128 04:54:30.607872 1383206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:54:31.200639 1383206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:54:31.200693 1383206 machine.go:91] provisioned docker machine in 2.195926198s
	I1128 04:54:31.200706 1383206 start.go:300] post-start starting for "running-upgrade-571883" (driver="docker")
	I1128 04:54:31.200718 1383206 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:54:31.200797 1383206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:54:31.200850 1383206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-571883
	I1128 04:54:31.220573 1383206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34500 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/running-upgrade-571883/id_rsa Username:docker}
	I1128 04:54:31.323489 1383206 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:54:31.330261 1383206 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:54:31.330295 1383206 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:54:31.330307 1383206 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:54:31.330344 1383206 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1128 04:54:31.330362 1383206 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:54:31.330444 1383206 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:54:31.330567 1383206 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:54:31.330730 1383206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:54:31.341151 1383206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:54:31.369259 1383206 start.go:303] post-start completed in 168.534769ms
	I1128 04:54:31.369361 1383206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:54:31.369406 1383206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-571883
	I1128 04:54:31.396900 1383206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34500 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/running-upgrade-571883/id_rsa Username:docker}
	I1128 04:54:31.496608 1383206 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
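The two df probes above read, respectively, the use% column (field 5 of `df -h`) and the free-gigabytes column (field 4 of `df -BG`) for /var. A hedged Go equivalent of that awk field extraction; the helper name dfField is ours, not minikube's:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// dfField runs df with one flag and returns the given 1-based column
	// of the second output line, mirroring awk 'NR==2{print $N}'.
	func dfField(flag, path string, col int) (string, error) {
		out, err := exec.Command("df", flag, path).Output()
		if err != nil {
			return "", err
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) < 2 {
			return "", fmt.Errorf("unexpected df output: %q", out)
		}
		fields := strings.Fields(lines[1])
		if col > len(fields) {
			return "", fmt.Errorf("no column %d in %q", col, lines[1])
		}
		return fields[col-1], nil
	}

	func main() {
		used, _ := dfField("-h", "/var", 5)  // e.g. "23%"
		free, _ := dfField("-BG", "/var", 4) // e.g. "17G"
		fmt.Println("used:", used, "free:", free)
	}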
	I1128 04:54:31.505962 1383206 fix.go:56] fixHost completed within 2.534396313s
	I1128 04:54:31.505991 1383206 start.go:83] releasing machines lock for "running-upgrade-571883", held for 2.534445305s
	I1128 04:54:31.506075 1383206 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-571883
	I1128 04:54:31.536367 1383206 ssh_runner.go:195] Run: cat /version.json
	I1128 04:54:31.536434 1383206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-571883
	I1128 04:54:31.536689 1383206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:54:31.536757 1383206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-571883
	I1128 04:54:31.562765 1383206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34500 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/running-upgrade-571883/id_rsa Username:docker}
	I1128 04:54:31.582451 1383206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34500 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/running-upgrade-571883/id_rsa Username:docker}
	W1128 04:54:31.669740 1383206 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1128 04:54:31.669884 1383206 ssh_runner.go:195] Run: systemctl --version
	I1128 04:54:31.805341 1383206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:54:32.022558 1383206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:54:32.032852 1383206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:54:32.067511 1383206 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:54:32.067603 1383206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:54:32.106785 1383206 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
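The two find/mv passes above park the loopback and the bridge/podman CNI configs under a .mk_disabled suffix so that only the CNI minikube manages (kindnet, for this docker+crio profile) stays active. A rough local Go equivalent of the bridge/podman pass, assuming it runs inside the node where /etc/cni/net.d exists:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d" // assumption: running inside the minikube node
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Same match as the find predicate: bridge or podman configs.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}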
	I1128 04:54:32.106809 1383206 start.go:472] detecting cgroup driver to use...
	I1128 04:54:32.106842 1383206 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:54:32.106891 1383206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:54:32.177337 1383206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:54:32.206154 1383206 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:54:32.206268 1383206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:54:32.221201 1383206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:54:32.236393 1383206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1128 04:54:32.257737 1383206 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1128 04:54:32.257819 1383206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:54:32.485295 1383206 docker.go:219] disabling docker service ...
	I1128 04:54:32.485365 1383206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:54:32.503265 1383206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:54:32.519560 1383206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:54:32.734704 1383206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:54:32.956791 1383206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
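The sequence above stops, disables, and masks cri-docker and docker so CRI-O is the only runtime left; as the earlier warning notes, individual steps may fail on images that never shipped the unit, and that is tolerated. A sketch of the same tolerant shutdown (command list copied from the log lines; the loop and error handling are ours):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		steps := [][]string{
			{"sudo", "systemctl", "stop", "-f", "cri-docker.socket"},
			{"sudo", "systemctl", "stop", "-f", "cri-docker.service"},
			{"sudo", "systemctl", "disable", "cri-docker.socket"},
			{"sudo", "systemctl", "mask", "cri-docker.service"},
			{"sudo", "systemctl", "stop", "-f", "docker.socket"},
			{"sudo", "systemctl", "stop", "-f", "docker.service"},
			{"sudo", "systemctl", "disable", "docker.socket"},
			{"sudo", "systemctl", "mask", "docker.service"},
		}
		for _, s := range steps {
			// A failure here may just mean the unit does not exist on this
			// image, so log and continue instead of aborting.
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				fmt.Printf("%v failed (might be ok): %v: %s\n", s, err, out)
			}
		}
	}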
	I1128 04:54:32.971421 1383206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:54:33.011886 1383206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1128 04:54:33.011979 1383206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:54:33.035617 1383206 out.go:177] 
	W1128 04:54:33.037633 1383206 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1128 04:54:33.037665 1383206 out.go:239] * 
	W1128 04:54:33.039031 1383206 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 04:54:33.042231 1383206 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-571883 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
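The exit status 90 traces straight back to the sed error above: the v1.17-era kicbase image ships no /etc/crio/crio.conf.d/02-crio.conf, so the in-place pause_image rewrite exits 2 and start aborts with RUNTIME_ENABLE. A guarded variant would create the drop-in before editing; the sketch below is illustrative only (not minikube's fix) and assumes pause_image belongs under CRI-O's [crio.image] TOML table:

	package main

	import "fmt"

	func main() {
		// Write the drop-in outright instead of sed-editing a file that may
		// not exist on older images; heredoc keeps the quoting simple.
		cmd := "sudo mkdir -p /etc/crio/crio.conf.d && " +
			"sudo tee /etc/crio/crio.conf.d/02-crio.conf <<'EOF'\n" +
			"[crio.image]\npause_image = \"registry.k8s.io/pause:3.2\"\nEOF"
		fmt.Println(cmd)
	}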
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-28 04:54:33.091191621 +0000 UTC m=+2504.440649106
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-571883
helpers_test.go:235: (dbg) docker inspect running-upgrade-571883:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f5ebd4059ebeaaf08c81219ae0dac72b16607cf4ab325da8ea49cfc27986a904",
	        "Created": "2023-11-28T04:53:43.238286799Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1380215,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T04:53:43.776261673Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/f5ebd4059ebeaaf08c81219ae0dac72b16607cf4ab325da8ea49cfc27986a904/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f5ebd4059ebeaaf08c81219ae0dac72b16607cf4ab325da8ea49cfc27986a904/hostname",
	        "HostsPath": "/var/lib/docker/containers/f5ebd4059ebeaaf08c81219ae0dac72b16607cf4ab325da8ea49cfc27986a904/hosts",
	        "LogPath": "/var/lib/docker/containers/f5ebd4059ebeaaf08c81219ae0dac72b16607cf4ab325da8ea49cfc27986a904/f5ebd4059ebeaaf08c81219ae0dac72b16607cf4ab325da8ea49cfc27986a904-json.log",
	        "Name": "/running-upgrade-571883",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-571883:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-571883",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f618e1623ecfff6d2f7328c6b3919de2f179031d08735ad25a2d999c17491eff-init/diff:/var/lib/docker/overlay2/a3ee60eef3eaaf47337b6f0781539ad15febb1ee267fc9f2e5886c941d64e816/diff:/var/lib/docker/overlay2/6faef0ac3025868a9d1e3f52b26728dabcf216dae6b126bb824c95a290e996a3/diff:/var/lib/docker/overlay2/bae46008e5104c3a558d0689ad17cdac59f3808b90523a03801420113dd6a6ff/diff:/var/lib/docker/overlay2/6140e220b6c231fb7ffc53b234ca59e99a1059414845fc59284de9fa015da70a/diff:/var/lib/docker/overlay2/7281f15c2802e99717d15806f1695eb3a617fb6bd00903e495db69929792cdc8/diff:/var/lib/docker/overlay2/787bb9098c14bf34ee5c05b61d350ebeeb4f0e850a79657d271297feec01e693/diff:/var/lib/docker/overlay2/ae0d756a3da693bc34927ae7a0673637f53723b7f7572048238233d7efb77775/diff:/var/lib/docker/overlay2/a409c5e87bb1c4eb9849ccc24d2f328f782fc44d17214ec3578f8ae398d113c3/diff:/var/lib/docker/overlay2/d803bb675aa339e7d11bf12210863d75020893dc8f321ccf1dfb0ecf20ab52c7/diff:/var/lib/docker/overlay2/43dbfd
fa3b1b2ddc4c720b3f78ef9ac0541a4b79cbaa133e9e96c4bdce060d3d/diff:/var/lib/docker/overlay2/2913a2498b6a1dc1dfc2a57622cfa3b280b39f60170ddeb9a55a52d167bc4c74/diff:/var/lib/docker/overlay2/cd7c7c98a9b8f2ca95349723b99ebfe24242f66e9ebd48e6cb9bc4fbbb2ad555/diff:/var/lib/docker/overlay2/4ced315dfeb0f6fc844777a9aef6ea392de2b9800f3fbed0f7b5c7b37904d066/diff:/var/lib/docker/overlay2/cc42392a5860c1b8fce1ce24369e38d055c4dba573843c14d4bc0fcbda34ca5e/diff:/var/lib/docker/overlay2/ac3a4791e967449af5a1bc73d1c7a165768596fd25ebe5f2eebc9d435599f37b/diff:/var/lib/docker/overlay2/7c2ae595dbad21977f810eca2c983ff499a1acdbce437b00e36e51c085ea5d41/diff:/var/lib/docker/overlay2/f7a51a696d4478d24ea5eab6e78ac97a4000766fde90b5807d944ba36019aff5/diff:/var/lib/docker/overlay2/a0496365ae9ef02a50c6f4cad612c3dfb94161b67a176d3abe7baae7bda7b0a7/diff:/var/lib/docker/overlay2/aa47576e74f82cae7b41ceeca1c8ae2b3e6ee4593231618f52e3049f84140efb/diff:/var/lib/docker/overlay2/7eb68b758d5012be6905052b3d7cdcd8e3cefabb79a26a8dd1b0018b1914551f/diff:/var/lib/d
ocker/overlay2/0e7a0288fc9f7a08b4d153b12e43182978239d48dd3e0f6a570c0bae14247f10/diff:/var/lib/docker/overlay2/1ffb8892e639fde2d59e27476878f99ef6d44ec92a2cb517fe6cc028f6bf18da/diff:/var/lib/docker/overlay2/dcbcc5fb9f4277eb5d5eac20a3f16c7e4b2929ee1cdd4b423771fb46a5d785e0/diff:/var/lib/docker/overlay2/1cd16cc670e378085eb93c13e6834d7263735f8f1999d896d76044f4941b8842/diff:/var/lib/docker/overlay2/3d917e0c4a26ccde625fabe74f8d60018eefd5fb3a70fa669136678ad99044b7/diff:/var/lib/docker/overlay2/5bea8da80513af43768be0ce4d92065d77df41408cb39bed1a25b308da2a805b/diff:/var/lib/docker/overlay2/ebd98219a25924c60a314e406ea93fac96e34f50d7b565d2ac4753d8167070a5/diff:/var/lib/docker/overlay2/d56b112a682d6470cfcee01537dca205b00b3d3eb4a757d4a392b541fc2cce90/diff:/var/lib/docker/overlay2/7826f8248992b4c2dfd9e2b03ddb5743ca2df0de6281c3cea2fcd75ed1c1e9e0/diff:/var/lib/docker/overlay2/12eeb5b41d19ac8b0bedb4493000a75fcf2db98afc7b511db8e5f878a0f2ab63/diff:/var/lib/docker/overlay2/b3120744803006afe3d39cd77a65397c78835da34d22f02983d25752b9f
0c34b/diff:/var/lib/docker/overlay2/2845f1cb55f870d0d6d4d053790459015a538c801a2ddccc73e6134167fbcae8/diff:/var/lib/docker/overlay2/3741bdb7a1192cc59cd93243be0134c2e95d0f2db2e44810917eb3f7f5e98554/diff:/var/lib/docker/overlay2/e64f4c27fa6b5e19206b3e136b71cf5ece4ce5d0318db36570de79cb2d983a9e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f618e1623ecfff6d2f7328c6b3919de2f179031d08735ad25a2d999c17491eff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f618e1623ecfff6d2f7328c6b3919de2f179031d08735ad25a2d999c17491eff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f618e1623ecfff6d2f7328c6b3919de2f179031d08735ad25a2d999c17491eff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-571883",
	                "Source": "/var/lib/docker/volumes/running-upgrade-571883/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-571883",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-571883",
	                "name.minikube.sigs.k8s.io": "running-upgrade-571883",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f03079b0f971224ced85c6c89fa93760939a3ccb57ff11fdb2f760b3665582dd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34500"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34497"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f03079b0f971",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-571883": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.236"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f5ebd4059ebe",
	                        "running-upgrade-571883"
	                    ],
	                    "NetworkID": "4b8e4482194191ad9bdd7fea65c56559ed989f8c60a82856751b8a9aba6cc2d4",
	                    "EndpointID": "5e71214341ce31f624b9878e9cca90672e51d5884b5f4154b960048bc11ff39a",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.236",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:ec",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-571883 -n running-upgrade-571883
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-571883 -n running-upgrade-571883: exit status 4 (619.747135ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 04:54:33.676594 1383932 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-571883" does not appear in /home/jenkins/minikube-integration/17671-1256059/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-571883" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-571883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-571883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-571883: (3.455651553s)
--- FAIL: TestRunningBinaryUpgrade (70.12s)

                                                
                                    
x
+
TestMissingContainerUpgrade (137.25s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.3368857994.exe start -p missing-upgrade-934743 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.3368857994.exe start -p missing-upgrade-934743 --memory=2200 --driver=docker  --container-runtime=crio: (1m34.490457618s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-934743
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-934743: (2.054105041s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-934743
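These three commands are the test's "break" step: provision with the old v1.17 binary, then stop and delete the node container behind minikube's back so the new binary must detect the missing container and recreate it. A small Go reproducer of the stop/rm step (profile name taken from this log; substitute your own):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Remove the node container out from under minikube, exactly as the
		// test does above.
		for _, args := range [][]string{
			{"docker", "stop", "missing-upgrade-934743"},
			{"docker", "rm", "missing-upgrade-934743"},
		} {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			fmt.Printf("%v -> %s err=%v\n", args, out, err)
		}
	}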
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-934743 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1128 04:51:12.678949 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-934743 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (36.466085923s)

                                                
                                                
-- stdout --
	* [missing-upgrade-934743] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-934743 in cluster missing-upgrade-934743
	* Pulling base image ...
	* docker "missing-upgrade-934743" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 04:50:49.944243 1366947 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:50:49.944445 1366947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:50:49.944474 1366947 out.go:309] Setting ErrFile to fd 2...
	I1128 04:50:49.944496 1366947 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:50:49.945709 1366947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:50:49.946148 1366947 out.go:303] Setting JSON to false
	I1128 04:50:49.947217 1366947 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27185,"bootTime":1701119865,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:50:49.947330 1366947 start.go:138] virtualization:  
	I1128 04:50:49.950581 1366947 out.go:177] * [missing-upgrade-934743] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:50:49.952804 1366947 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:50:49.952967 1366947 notify.go:220] Checking for updates...
	I1128 04:50:49.956408 1366947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:50:49.958558 1366947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:50:49.960162 1366947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:50:49.961570 1366947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:50:49.963281 1366947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:50:49.965309 1366947 config.go:182] Loaded profile config "missing-upgrade-934743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 04:50:49.967560 1366947 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 04:50:49.969097 1366947 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:50:50.002199 1366947 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:50:50.002332 1366947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:50:50.087922 1366947 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:50:50.07711249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:50:50.088092 1366947 docker.go:295] overlay module found
	I1128 04:50:50.090434 1366947 out.go:177] * Using the docker driver based on existing profile
	I1128 04:50:50.092407 1366947 start.go:298] selected driver: docker
	I1128 04:50:50.092431 1366947 start.go:902] validating driver "docker" against &{Name:missing-upgrade-934743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-934743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.88 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 04:50:50.092540 1366947 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:50:50.093351 1366947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:50:50.161507 1366947 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:50:50.151996045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
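Both `docker system info --format "{{json .}}"` probes above dump the full engine state as one JSON object; minikube parses it to decide driver health. A sketch that runs the same probe and unmarshals only a few fields of interest (the partial struct is ours, but the JSON keys are standard `docker info` output):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// A deliberately partial view of the `docker system info` JSON.
	type dockerInfo struct {
		ServerVersion   string
		OperatingSystem string
		Architecture    string
		NCPU            int
		MemTotal        int64
		CgroupDriver    string
	}

	func main() {
		out, err := exec.Command("docker", "system", "info",
			"--format", "{{json .}}").Output()
		if err != nil {
			fmt.Println("docker not reachable:", err)
			return
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			fmt.Println("parse:", err)
			return
		}
		fmt.Printf("%+v\n", info)
	}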
	I1128 04:50:50.161866 1366947 cni.go:84] Creating CNI manager for ""
	I1128 04:50:50.161887 1366947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:50:50.161899 1366947 start_flags.go:323] config:
	{Name:missing-upgrade-934743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-934743 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.88 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 04:50:50.163991 1366947 out.go:177] * Starting control plane node missing-upgrade-934743 in cluster missing-upgrade-934743
	I1128 04:50:50.165886 1366947 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:50:50.167838 1366947 out.go:177] * Pulling base image ...
	I1128 04:50:50.169395 1366947 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1128 04:50:50.169488 1366947 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1128 04:50:50.188448 1366947 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1128 04:50:50.188631 1366947 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1128 04:50:50.189200 1366947 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1128 04:50:50.245981 1366947 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1128 04:50:50.246137 1366947 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/missing-upgrade-934743/config.json ...
	I1128 04:50:50.246254 1366947 cache.go:107] acquiring lock: {Name:mka9a2e991eba10434a66f00ab2058fa051639a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:50.246368 1366947 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1128 04:50:50.246406 1366947 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 157.907µs
	I1128 04:50:50.246436 1366947 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1128 04:50:50.246437 1366947 cache.go:107] acquiring lock: {Name:mk26c91ba81ebf0485d3d2b4159504e48036386d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:50.246470 1366947 cache.go:107] acquiring lock: {Name:mk94967ebcdd58bf9df6dfb0725979bab0037761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:50.246581 1366947 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1128 04:50:50.246869 1366947 cache.go:107] acquiring lock: {Name:mka6833120e0559f634e3494d0d83941d371ad59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:50.247107 1366947 cache.go:107] acquiring lock: {Name:mk673aa97dd638eab8157fa37803fc00f97475e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:50.247377 1366947 cache.go:107] acquiring lock: {Name:mk9966ff9e63ed5631198d157e2b6d006484f44a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:50.247633 1366947 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1128 04:50:50.246412 1366947 cache.go:107] acquiring lock: {Name:mk5c96306b5f465374d8e9f4ce93d0675ac42790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:50.247857 1366947 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1128 04:50:50.246453 1366947 cache.go:107] acquiring lock: {Name:mkbc5e1711f09ecc0008e09508e418955e4198e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:50.248818 1366947 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1128 04:50:50.248005 1366947 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1128 04:50:50.248137 1366947 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1128 04:50:50.248163 1366947 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1128 04:50:50.248729 1366947 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1128 04:50:50.249674 1366947 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1128 04:50:50.249948 1366947 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1128 04:50:50.250373 1366947 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1128 04:50:50.250637 1366947 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1128 04:50:50.250794 1366947 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1128 04:50:50.251295 1366947 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	W1128 04:50:50.570105 1366947 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1128 04:50:50.570221 1366947 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W1128 04:50:50.606003 1366947 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1128 04:50:50.606156 1366947 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1128 04:50:50.635857 1366947 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1128 04:50:50.635981 1366947 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1128 04:50:50.637318 1366947 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1128 04:50:50.635883 1366947 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1128 04:50:50.647184 1366947 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1128 04:50:50.656491 1366947 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I1128 04:50:50.785627 1366947 cache.go:157] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1128 04:50:50.785651 1366947 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 538.548994ms
	I1128 04:50:50.785664 1366947 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?    > gcr.io/k8s-minikube/kicbase...:  513.37 KiB / 287.99 MiB [] 0.17% ? p/s ?I1128 04:50:51.188977 1366947 cache.go:157] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1128 04:50:51.189019 1366947 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 941.644002ms
	I1128 04:50:51.189032 1366947 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  13.86 MiB / 287.99 MiB [>] 4.81% ? p/s ?    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 42.18 MiB I1128 04:50:51.505712 1366947 cache.go:157] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1128 04:50:51.505738 1366947 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.259267655s
	I1128 04:50:51.505752 1366947 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1128 04:50:51.561404 1366947 cache.go:157] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1128 04:50:51.561433 1366947 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.314994925s
	I1128 04:50:51.561446 1366947 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 42.18 MiB     > gcr.io/k8s-minikube/kicbase...:  25.94 MiB / 287.99 MiB  9.01% 42.18 MiB I1128 04:50:52.011126 1366947 cache.go:157] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1128 04:50:52.011210 1366947 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.764754237s
	I1128 04:50:52.011239 1366947 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  32.89 MiB / 287.99 MiB  11.42% 40.23 MiB    > gcr.io/k8s-minikube/kicbase...:  43.87 MiB / 287.99 MiB  15.23% 40.23 MiB    > gcr.io/k8s-minikube/kicbase...:  59.04 MiB / 287.99 MiB  20.50% 40.23 MiBI1128 04:50:52.591180 1366947 cache.go:157] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1128 04:50:52.591221 1366947 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.344803588s
	I1128 04:50:52.591235 1366947 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  67.79 MiB / 287.99 MiB  23.54% 41.38 MiB    > gcr.io/k8s-minikube/kicbase...:  75.79 MiB / 287.99 MiB  26.32% 41.38 MiB    > gcr.io/k8s-minikube/kicbase...:  89.39 MiB / 287.99 MiB  31.04% 41.38 MiB    > gcr.io/k8s-minikube/kicbase...:  104.89 MiB / 287.99 MiB  36.42% 42.70 Mi    > gcr.io/k8s-minikube/kicbase...:  117.23 MiB / 287.99 MiB  40.71% 42.70 Mi    > gcr.io/k8s-minikube/kicbase...:  123.79 MiB / 287.99 MiB  42.99% 42.70 Mi    > gcr.io/k8s-minikube/kicbase...:  140.14 MiB / 287.99 MiB  48.66% 43.74 Mi    > gcr.io/k8s-minikube/kicbase...:  154.09 MiB / 287.99 MiB  53.51% 43.74 Mi    > gcr.io/k8s-minikube/kicbase...:  171.72 MiB / 287.99 MiB  59.63% 43.74 Mi    > gcr.io/k8s-minikube/kicbase...:  172.21 MiB / 287.99 MiB  59.80% 44.33 Mi    > gcr.io/k8s-minikube/kicbase...:  187.73 MiB / 287.99 MiB  65.18% 44.33 Mi    > gcr.io/k8s-minikube/kicbase...:  200.29 MiB / 287.99 MiB  69.55% 44.33 Mi    > gcr.io/k8s-minikube/kicbase...:  209.68 MiB / 287.99 MiB  72.81% 45.54 Mi    > gcr.io/k8s-minikube/kicbase...:  231.76 MiB / 287.99 MiB  80.48% 45.54 Mi    > gcr.io/k8s-minikube/kicbase...:  241.34 MiB / 287.99 MiB  83.80% 45.54 MiI1128 04:50:55.501612 1366947 cache.go:157] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1128 04:50:55.502690 1366947 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 5.255570947s
	I1128 04:50:55.502749 1366947 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1128 04:50:55.502781 1366947 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  265.05 MiB / 287.99 MiB  92.03% 48.55 Mi    > gcr.io/k8s-minikube/kicbase...:  268.04 MiB / 287.99 MiB  93.07% 48.55 Mi    > gcr.io/k8s-minikube/kicbase...:  287.96 MiB / 287.99 MiB  99.99% 48.55 Mi    > gcr.io/k8s-minikube/kicbase...:  287.96 MiB / 287.99 MiB  99.99% 47.88 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 47.88 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 47.88 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 44.79 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 44.79 Mi    > gcr.io/k8s-minikube/kicbase...:  287.98 MiB / 287.99 MiB  100.00% 44.79 M    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 41.90 M    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 43.52 MI1128 04:50:57.454191 1366947 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1128 04:50:57.454224 1366947 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1128 04:50:57.684538 1366947 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1128 04:50:57.684581 1366947 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:50:57.684638 1366947 start.go:365] acquiring machines lock for missing-upgrade-934743: {Name:mkdd2d12be2d09fc9750b919e1c771696637b275 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:57.684804 1366947 start.go:369] acquired machines lock for "missing-upgrade-934743" in 63.975µs
	I1128 04:50:57.684830 1366947 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:50:57.684845 1366947 fix.go:54] fixHost starting: 
	I1128 04:50:57.685196 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:50:57.702388 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:50:57.702475 1366947 fix.go:102] recreateIfNeeded on missing-upgrade-934743: state= err=unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:57.702495 1366947 fix.go:107] machineExists: false. err=machine does not exist
	I1128 04:50:57.713332 1366947 out.go:177] * docker "missing-upgrade-934743" container is missing, will recreate.
	I1128 04:50:57.722396 1366947 delete.go:124] DEMOLISHING missing-upgrade-934743 ...
	I1128 04:50:57.722510 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:50:57.741991 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	W1128 04:50:57.742057 1366947 stop.go:75] unable to get state: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:57.742073 1366947 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:57.742543 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:50:57.760979 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:50:57.761045 1366947 delete.go:82] Unable to get host status for missing-upgrade-934743, assuming it has already been deleted: state: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:57.761123 1366947 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-934743
	W1128 04:50:57.778034 1366947 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-934743 returned with exit code 1
	I1128 04:50:57.778090 1366947 kic.go:371] could not find the container missing-upgrade-934743 to remove it. will try anyways
	I1128 04:50:57.778158 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:50:57.796507 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	W1128 04:50:57.796563 1366947 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:57.796631 1366947 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-934743 /bin/bash -c "sudo init 0"
	W1128 04:50:57.820431 1366947 cli_runner.go:211] docker exec --privileged -t missing-upgrade-934743 /bin/bash -c "sudo init 0" returned with exit code 1
	I1128 04:50:57.820463 1366947 oci.go:650] error shutdown missing-upgrade-934743: docker exec --privileged -t missing-upgrade-934743 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:58.821549 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:50:58.839317 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:50:58.839391 1366947 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:58.839408 1366947 oci.go:664] temporary error: container missing-upgrade-934743 status is  but expect it to be exited
	I1128 04:50:58.839445 1366947 retry.go:31] will retry after 495.50265ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:59.335124 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:50:59.352085 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:50:59.352148 1366947 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:59.352157 1366947 oci.go:664] temporary error: container missing-upgrade-934743 status is  but expect it to be exited
	I1128 04:50:59.352202 1366947 retry.go:31] will retry after 507.932622ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:59.861053 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:50:59.881964 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:50:59.882028 1366947 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:50:59.882045 1366947 oci.go:664] temporary error: container missing-upgrade-934743 status is  but expect it to be exited
	I1128 04:50:59.882069 1366947 retry.go:31] will retry after 1.062382181s: couldn't verify container is exited. %v: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:00.944860 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:51:00.968225 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:51:00.968298 1366947 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:00.968307 1366947 oci.go:664] temporary error: container missing-upgrade-934743 status is  but expect it to be exited
	I1128 04:51:00.968333 1366947 retry.go:31] will retry after 2.336981188s: couldn't verify container is exited. %v: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:03.305590 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:51:03.322006 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:51:03.322068 1366947 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:03.322092 1366947 oci.go:664] temporary error: container missing-upgrade-934743 status is  but expect it to be exited
	I1128 04:51:03.322122 1366947 retry.go:31] will retry after 2.188118821s: couldn't verify container is exited. %v: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:05.511267 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:51:05.528873 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:51:05.528935 1366947 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:05.528948 1366947 oci.go:664] temporary error: container missing-upgrade-934743 status is  but expect it to be exited
	I1128 04:51:05.528979 1366947 retry.go:31] will retry after 2.924568792s: couldn't verify container is exited. %v: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:08.454221 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:51:08.475723 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:51:08.475785 1366947 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:08.475797 1366947 oci.go:664] temporary error: container missing-upgrade-934743 status is  but expect it to be exited
	I1128 04:51:08.475833 1366947 retry.go:31] will retry after 6.108970301s: couldn't verify container is exited. %v: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:14.585086 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:51:14.601267 1366947 cli_runner.go:211] docker container inspect missing-upgrade-934743 --format={{.State.Status}} returned with exit code 1
	I1128 04:51:14.601332 1366947 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	I1128 04:51:14.601345 1366947 oci.go:664] temporary error: container missing-upgrade-934743 status is  but expect it to be exited
	I1128 04:51:14.601380 1366947 oci.go:88] couldn't shut down missing-upgrade-934743 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-934743": docker container inspect missing-upgrade-934743 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-934743
	 
	I1128 04:51:14.601444 1366947 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-934743
	I1128 04:51:14.617569 1366947 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-934743
	W1128 04:51:14.633802 1366947 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-934743 returned with exit code 1
	I1128 04:51:14.633897 1366947 cli_runner.go:164] Run: docker network inspect missing-upgrade-934743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:51:14.650633 1366947 cli_runner.go:164] Run: docker network rm missing-upgrade-934743
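
The teardown above is a bounded verify-then-back-off loop: each failed shutdown check schedules another probe after a growing delay (0.5s up to roughly 6s in this run), and once the budget is spent the error is downgraded to "might be okay" and a forced "docker rm -f" runs anyway. A minimal sketch of that loop shape, not minikube's actual retry.go:

    // retryShutdown retries verify with jittered, roughly doubling delays and
    // returns the last error once the budget is exhausted, so the caller can
    // fall back to a forced removal. Hypothetical helper for illustration.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryShutdown(verify func() error, budget time.Duration) error {
    	deadline := time.Now().Add(budget)
    	delay := 500 * time.Millisecond
    	for {
    		err := verify()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("couldn't verify container is exited: %w", err)
    		}
    		// jittered backoff, mirroring the 495ms..6.1s gaps in the log above
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		delay *= 2
    	}
    }

    func main() {
    	err := retryShutdown(func() error { return fmt.Errorf("unknown state") }, 2*time.Second)
    	fmt.Println(err) // non-nil here; the caller would still force-remove
    }
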
	I1128 04:51:14.748930 1366947 fix.go:114] Sleeping 1 second for extra luck!
	I1128 04:51:15.749093 1366947 start.go:125] createHost starting for "" (driver="docker")
	I1128 04:51:15.751493 1366947 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1128 04:51:15.751652 1366947 start.go:159] libmachine.API.Create for "missing-upgrade-934743" (driver="docker")
	I1128 04:51:15.751681 1366947 client.go:168] LocalClient.Create starting
	I1128 04:51:15.751775 1366947 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem
	I1128 04:51:15.751816 1366947 main.go:141] libmachine: Decoding PEM data...
	I1128 04:51:15.751839 1366947 main.go:141] libmachine: Parsing certificate...
	I1128 04:51:15.751901 1366947 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem
	I1128 04:51:15.751929 1366947 main.go:141] libmachine: Decoding PEM data...
	I1128 04:51:15.751946 1366947 main.go:141] libmachine: Parsing certificate...
	I1128 04:51:15.752215 1366947 cli_runner.go:164] Run: docker network inspect missing-upgrade-934743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1128 04:51:15.778416 1366947 cli_runner.go:211] docker network inspect missing-upgrade-934743 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1128 04:51:15.778494 1366947 network_create.go:281] running [docker network inspect missing-upgrade-934743] to gather additional debugging logs...
	I1128 04:51:15.778516 1366947 cli_runner.go:164] Run: docker network inspect missing-upgrade-934743
	W1128 04:51:15.795605 1366947 cli_runner.go:211] docker network inspect missing-upgrade-934743 returned with exit code 1
	I1128 04:51:15.795642 1366947 network_create.go:284] error running [docker network inspect missing-upgrade-934743]: docker network inspect missing-upgrade-934743: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-934743 not found
	I1128 04:51:15.795656 1366947 network_create.go:286] output of [docker network inspect missing-upgrade-934743]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-934743 not found
	
	** /stderr **
	I1128 04:51:15.795769 1366947 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:51:15.818261 1366947 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-457410d7183c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:60:a5:a2:7c} reservation:<nil>}
	I1128 04:51:15.818614 1366947 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0d78a22dd546 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bd:04:fe:9e} reservation:<nil>}
	I1128 04:51:15.818958 1366947 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-686ec87fec55 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a3:ec:05:d2} reservation:<nil>}
	I1128 04:51:15.820111 1366947 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40036dc570}
	I1128 04:51:15.820179 1366947 network_create.go:124] attempt to create docker network missing-upgrade-934743 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1128 04:51:15.820260 1366947 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-934743 missing-upgrade-934743
	I1128 04:51:15.893554 1366947 network_create.go:108] docker network missing-upgrade-934743 192.168.76.0/24 created
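
The subnet walk above starts at 192.168.49.0/24 and advances the third octet by 9 (49, 58, 67, 76, ...) until it finds a /24 no existing bridge occupies. A condensed sketch of that walk, with the taken set hard-coded for illustration; the real code derives it from host interfaces and docker networks:

    // freeSubnet returns the first candidate /24 not present in taken.
    // Hypothetical helper; step size and start match the log above.
    package main

    import "fmt"

    func freeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 247; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // bridges already present in this run
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	fmt.Println(freeSubnet(taken)) // 192.168.76.0/24, matching the log
    }
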
	I1128 04:51:15.893593 1366947 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-934743" container
	I1128 04:51:15.893671 1366947 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1128 04:51:15.910401 1366947 cli_runner.go:164] Run: docker volume create missing-upgrade-934743 --label name.minikube.sigs.k8s.io=missing-upgrade-934743 --label created_by.minikube.sigs.k8s.io=true
	I1128 04:51:15.926475 1366947 oci.go:103] Successfully created a docker volume missing-upgrade-934743
	I1128 04:51:15.926573 1366947 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-934743-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-934743 --entrypoint /usr/bin/test -v missing-upgrade-934743:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1128 04:51:16.451049 1366947 oci.go:107] Successfully prepared a docker volume missing-upgrade-934743
	I1128 04:51:16.451085 1366947 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1128 04:51:16.451262 1366947 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1128 04:51:16.451386 1366947 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1128 04:51:16.534124 1366947 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-934743 --name missing-upgrade-934743 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-934743 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-934743 --network missing-upgrade-934743 --ip 192.168.76.2 --volume missing-upgrade-934743:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1128 04:51:16.922927 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Running}}
	I1128 04:51:16.954844 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	I1128 04:51:16.980818 1366947 cli_runner.go:164] Run: docker exec missing-upgrade-934743 stat /var/lib/dpkg/alternatives/iptables
	I1128 04:51:17.051859 1366947 oci.go:144] the created container "missing-upgrade-934743" has a running status.
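
Note that the run command publishes 127.0.0.1::22 (and the other node ports) with an empty host port, so Docker assigns an ephemeral one, 34478 in this run; every later SSH dial first resolves it with the inspect template seen repeatedly below. A sketch of that lookup, using a hypothetical helper name:

    // sshHostPort resolves the ephemeral host port bound to the node's 22/tcp
    // via the same Go-template inspect query the log uses.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("missing-upgrade-934743")
    	fmt.Println(port, err) // e.g. "34478 <nil>" while the node is running
    }
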
	I1128 04:51:17.051894 1366947 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa...
	I1128 04:51:17.221647 1366947 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1128 04:51:17.257926 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	I1128 04:51:17.290330 1366947 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1128 04:51:17.290361 1366947 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-934743 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1128 04:51:17.393051 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	I1128 04:51:17.429650 1366947 machine.go:88] provisioning docker machine ...
	I1128 04:51:17.429684 1366947 ubuntu.go:169] provisioning hostname "missing-upgrade-934743"
	I1128 04:51:17.429754 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:17.456259 1366947 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:17.457140 1366947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1128 04:51:17.457161 1366947 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-934743 && echo "missing-upgrade-934743" | sudo tee /etc/hostname
	I1128 04:51:17.461219 1366947 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54174->127.0.0.1:34478: read: connection reset by peer
	I1128 04:51:20.636192 1366947 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-934743
	
	I1128 04:51:20.636324 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:20.668758 1366947 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:20.669166 1366947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1128 04:51:20.669184 1366947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-934743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-934743/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-934743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:51:20.825967 1366947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:51:20.826052 1366947 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:51:20.826085 1366947 ubuntu.go:177] setting up certificates
	I1128 04:51:20.826128 1366947 provision.go:83] configureAuth start
	I1128 04:51:20.826221 1366947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-934743
	I1128 04:51:20.845956 1366947 provision.go:138] copyHostCerts
	I1128 04:51:20.846016 1366947 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:51:20.846025 1366947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:51:20.846103 1366947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:51:20.846209 1366947 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:51:20.846214 1366947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:51:20.846241 1366947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:51:20.846299 1366947 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:51:20.846304 1366947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:51:20.846327 1366947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:51:20.846367 1366947 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-934743 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-934743]
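
The server cert above is signed by the machine CA with the node IP and hostnames in the SAN list, which is why both 192.168.76.2 and localhost verify later. A self-contained sketch of that shape using only the standard library, assuming a freshly generated CA instead of the ca.pem/ca-key.pem pair minikube loads (errors elided for brevity):

    // Generate a CA, then sign a server certificate carrying the SANs from
    // the log line above. Sketch only, not minikube's provision code.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{Organization: []string{"sketch-CA"}},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-934743"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as logged: node IP, loopback, and the node hostnames
    		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "missing-upgrade-934743"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
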
	I1128 04:51:21.689402 1366947 provision.go:172] copyRemoteCerts
	I1128 04:51:21.689496 1366947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:51:21.689564 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:21.734129 1366947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa Username:docker}
	I1128 04:51:21.845052 1366947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 04:51:21.888351 1366947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:51:21.949094 1366947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 04:51:21.992629 1366947 provision.go:86] duration metric: configureAuth took 1.166469716s
	I1128 04:51:21.992677 1366947 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:51:21.992876 1366947 config.go:182] Loaded profile config "missing-upgrade-934743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 04:51:21.992994 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:22.028395 1366947 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:22.028859 1366947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1128 04:51:22.028881 1366947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:51:22.601475 1366947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:51:22.601502 1366947 machine.go:91] provisioned docker machine in 5.171829718s
	I1128 04:51:22.601515 1366947 client.go:171] LocalClient.Create took 6.849820802s
	I1128 04:51:22.601527 1366947 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-934743" took 6.849876375s
	I1128 04:51:22.601537 1366947 start.go:300] post-start starting for "missing-upgrade-934743" (driver="docker")
	I1128 04:51:22.601552 1366947 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:51:22.601622 1366947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:51:22.601666 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:22.638080 1366947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa Username:docker}
	I1128 04:51:22.761092 1366947 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:51:22.765680 1366947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:51:22.765709 1366947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:51:22.765720 1366947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:51:22.765728 1366947 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1128 04:51:22.765739 1366947 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:51:22.765800 1366947 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:51:22.765893 1366947 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:51:22.765999 1366947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:51:22.783383 1366947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:51:22.824464 1366947 start.go:303] post-start completed in 222.906846ms
	I1128 04:51:22.824914 1366947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-934743
	I1128 04:51:22.860355 1366947 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/missing-upgrade-934743/config.json ...
	I1128 04:51:22.860677 1366947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:51:22.860723 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:22.891433 1366947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa Username:docker}
	I1128 04:51:23.010038 1366947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:51:23.025293 1366947 start.go:128] duration metric: createHost completed in 7.276162552s
	I1128 04:51:23.025386 1366947 cli_runner.go:164] Run: docker container inspect missing-upgrade-934743 --format={{.State.Status}}
	W1128 04:51:23.073346 1366947 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:51:23.073378 1366947 machine.go:88] provisioning docker machine ...
	I1128 04:51:23.073397 1366947 ubuntu.go:169] provisioning hostname "missing-upgrade-934743"
	I1128 04:51:23.073464 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:23.112871 1366947 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:23.113339 1366947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1128 04:51:23.113358 1366947 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-934743 && echo "missing-upgrade-934743" | sudo tee /etc/hostname
	I1128 04:51:23.317401 1366947 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-934743
	
	I1128 04:51:23.317523 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:23.367115 1366947 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:23.367520 1366947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1128 04:51:23.367540 1366947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-934743' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-934743/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-934743' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:51:23.533838 1366947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:51:23.533870 1366947 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:51:23.533923 1366947 ubuntu.go:177] setting up certificates
	I1128 04:51:23.533940 1366947 provision.go:83] configureAuth start
	I1128 04:51:23.534013 1366947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-934743
	I1128 04:51:23.577873 1366947 provision.go:138] copyHostCerts
	I1128 04:51:23.577946 1366947 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:51:23.577962 1366947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:51:23.578042 1366947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:51:23.578151 1366947 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:51:23.578163 1366947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:51:23.578190 1366947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:51:23.578253 1366947 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:51:23.578264 1366947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:51:23.578288 1366947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:51:23.578345 1366947 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-934743 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-934743]
	I1128 04:51:23.964288 1366947 provision.go:172] copyRemoteCerts
	I1128 04:51:23.964362 1366947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:51:23.964412 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:23.983160 1366947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa Username:docker}
	I1128 04:51:24.087367 1366947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:51:24.123249 1366947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 04:51:24.167080 1366947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 04:51:24.206656 1366947 provision.go:86] duration metric: configureAuth took 672.700029ms
	I1128 04:51:24.206683 1366947 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:51:24.206862 1366947 config.go:182] Loaded profile config "missing-upgrade-934743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 04:51:24.206984 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:24.228946 1366947 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:24.229349 1366947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1128 04:51:24.229371 1366947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:51:24.669152 1366947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:51:24.669179 1366947 machine.go:91] provisioned docker machine in 1.595793179s
	I1128 04:51:24.669189 1366947 start.go:300] post-start starting for "missing-upgrade-934743" (driver="docker")
	I1128 04:51:24.669200 1366947 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:51:24.669268 1366947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:51:24.669312 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:24.693757 1366947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa Username:docker}
	I1128 04:51:24.816401 1366947 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:51:24.820518 1366947 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:51:24.820540 1366947 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:51:24.820552 1366947 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:51:24.820559 1366947 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1128 04:51:24.820569 1366947 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:51:24.820625 1366947 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:51:24.820719 1366947 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:51:24.820824 1366947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:51:24.831110 1366947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:51:24.877214 1366947 start.go:303] post-start completed in 208.007558ms
	I1128 04:51:24.877364 1366947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:51:24.877442 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:24.918038 1366947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa Username:docker}
	I1128 04:51:25.035137 1366947 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:51:25.044903 1366947 fix.go:56] fixHost completed within 27.360048798s
	I1128 04:51:25.044925 1366947 start.go:83] releasing machines lock for "missing-upgrade-934743", held for 27.360108253s
	I1128 04:51:25.044995 1366947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-934743
	I1128 04:51:25.073874 1366947 ssh_runner.go:195] Run: cat /version.json
	I1128 04:51:25.073945 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:25.076746 1366947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:51:25.076820 1366947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-934743
	I1128 04:51:25.120761 1366947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa Username:docker}
	I1128 04:51:25.129951 1366947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/missing-upgrade-934743/id_rsa Username:docker}
	W1128 04:51:25.225502 1366947 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1128 04:51:25.225648 1366947 ssh_runner.go:195] Run: systemctl --version
	I1128 04:51:25.311531 1366947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:51:25.460132 1366947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:51:25.467061 1366947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:51:25.527468 1366947 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:51:25.527547 1366947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:51:25.601409 1366947 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:51:25.601433 1366947 start.go:472] detecting cgroup driver to use...
	I1128 04:51:25.601465 1366947 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:51:25.601518 1366947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:51:25.630651 1366947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:51:25.643810 1366947 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:51:25.643877 1366947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:51:25.656699 1366947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:51:25.669826 1366947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1128 04:51:25.682847 1366947 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
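
Disable failures on absent units are deliberately tolerated: a crio node may have no cri-docker.socket to disable, so the error is logged as "might be ok" and provisioning moves straight on to masking. A sketch of that tolerant pattern, with a hypothetical helper:

    // disableIfPresent attempts `systemctl disable` and downgrades a failure
    // (typically "Unit file ... does not exist") to a warning, matching the
    // (might be ok) behavior in the log above.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func disableIfPresent(unit string) {
    	out, err := exec.Command("sudo", "systemctl", "disable", unit).CombinedOutput()
    	if err != nil {
    		log.Printf("failed to disable %s (might be ok): %v\n%s", unit, err, out)
    		return
    	}
    	log.Printf("disabled %s", unit)
    }

    func main() {
    	disableIfPresent("cri-docker.socket")
    }
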
	I1128 04:51:25.682912 1366947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:51:25.862952 1366947 docker.go:219] disabling docker service ...
	I1128 04:51:25.863021 1366947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:51:25.881864 1366947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:51:25.899193 1366947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:51:26.062239 1366947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:51:26.240256 1366947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:51:26.259237 1366947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:51:26.299649 1366947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1128 04:51:26.299782 1366947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:51:26.332969 1366947 out.go:177] 
	W1128 04:51:26.334854 1366947 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1128 04:51:26.335044 1366947 out.go:239] * 
	* 
	W1128 04:51:26.336115 1366947 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 04:51:26.338462 1366947 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-934743 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-11-28 04:51:26.404712667 +0000 UTC m=+2317.754170152
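
The fatal step is the pause-image rewrite: the node was recreated from kicbase v0.0.17, an image that evidently predates the /etc/crio/crio.conf.d drop-in layout, so the sed target does not exist and minikube exits with RUNTIME_ENABLE. A defensive variant would test for the drop-in first and fall back to a monolithic config path; the fallback below is an assumption for illustration, not what this minikube build does:

    // setPauseImage tries candidate CRI-O config files in order and rewrites
    // pause_image in the first one that exists. The /etc/crio/crio.conf
    // fallback is a hypothetical choice for older image generations.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func setPauseImage(runSSH func(cmd string) error, image string) error {
    	candidates := []string{"/etc/crio/crio.conf.d/02-crio.conf", "/etc/crio/crio.conf"}
    	for _, f := range candidates {
    		if runSSH(fmt.Sprintf("sudo test -f %s", f)) != nil {
    			continue // config file absent in this image generation
    		}
    		return runSSH(fmt.Sprintf(
    			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, image, f))
    	}
    	return fmt.Errorf("no crio config file found")
    }

    func main() {
    	// toy runner that executes locally instead of over SSH, for demonstration
    	local := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
    	fmt.Println(setPauseImage(local, "registry.k8s.io/pause:3.2"))
    }
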
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-934743
helpers_test.go:235: (dbg) docker inspect missing-upgrade-934743:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dbc658c7a7279a3ca8d33ca447da9f319e5e92dbd66b4c7643678a533be3be64",
	        "Created": "2023-11-28T04:51:16.550602236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1368410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T04:51:16.912965606Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/dbc658c7a7279a3ca8d33ca447da9f319e5e92dbd66b4c7643678a533be3be64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dbc658c7a7279a3ca8d33ca447da9f319e5e92dbd66b4c7643678a533be3be64/hostname",
	        "HostsPath": "/var/lib/docker/containers/dbc658c7a7279a3ca8d33ca447da9f319e5e92dbd66b4c7643678a533be3be64/hosts",
	        "LogPath": "/var/lib/docker/containers/dbc658c7a7279a3ca8d33ca447da9f319e5e92dbd66b4c7643678a533be3be64/dbc658c7a7279a3ca8d33ca447da9f319e5e92dbd66b4c7643678a533be3be64-json.log",
	        "Name": "/missing-upgrade-934743",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-934743:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-934743",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c4265b909a2bc2c6086130953e636788b578deff0c5f1dbb27ab58986e290558-init/diff:/var/lib/docker/overlay2/a3ee60eef3eaaf47337b6f0781539ad15febb1ee267fc9f2e5886c941d64e816/diff:/var/lib/docker/overlay2/6faef0ac3025868a9d1e3f52b26728dabcf216dae6b126bb824c95a290e996a3/diff:/var/lib/docker/overlay2/bae46008e5104c3a558d0689ad17cdac59f3808b90523a03801420113dd6a6ff/diff:/var/lib/docker/overlay2/6140e220b6c231fb7ffc53b234ca59e99a1059414845fc59284de9fa015da70a/diff:/var/lib/docker/overlay2/7281f15c2802e99717d15806f1695eb3a617fb6bd00903e495db69929792cdc8/diff:/var/lib/docker/overlay2/787bb9098c14bf34ee5c05b61d350ebeeb4f0e850a79657d271297feec01e693/diff:/var/lib/docker/overlay2/ae0d756a3da693bc34927ae7a0673637f53723b7f7572048238233d7efb77775/diff:/var/lib/docker/overlay2/a409c5e87bb1c4eb9849ccc24d2f328f782fc44d17214ec3578f8ae398d113c3/diff:/var/lib/docker/overlay2/d803bb675aa339e7d11bf12210863d75020893dc8f321ccf1dfb0ecf20ab52c7/diff:/var/lib/docker/overlay2/43dbfd
fa3b1b2ddc4c720b3f78ef9ac0541a4b79cbaa133e9e96c4bdce060d3d/diff:/var/lib/docker/overlay2/2913a2498b6a1dc1dfc2a57622cfa3b280b39f60170ddeb9a55a52d167bc4c74/diff:/var/lib/docker/overlay2/cd7c7c98a9b8f2ca95349723b99ebfe24242f66e9ebd48e6cb9bc4fbbb2ad555/diff:/var/lib/docker/overlay2/4ced315dfeb0f6fc844777a9aef6ea392de2b9800f3fbed0f7b5c7b37904d066/diff:/var/lib/docker/overlay2/cc42392a5860c1b8fce1ce24369e38d055c4dba573843c14d4bc0fcbda34ca5e/diff:/var/lib/docker/overlay2/ac3a4791e967449af5a1bc73d1c7a165768596fd25ebe5f2eebc9d435599f37b/diff:/var/lib/docker/overlay2/7c2ae595dbad21977f810eca2c983ff499a1acdbce437b00e36e51c085ea5d41/diff:/var/lib/docker/overlay2/f7a51a696d4478d24ea5eab6e78ac97a4000766fde90b5807d944ba36019aff5/diff:/var/lib/docker/overlay2/a0496365ae9ef02a50c6f4cad612c3dfb94161b67a176d3abe7baae7bda7b0a7/diff:/var/lib/docker/overlay2/aa47576e74f82cae7b41ceeca1c8ae2b3e6ee4593231618f52e3049f84140efb/diff:/var/lib/docker/overlay2/7eb68b758d5012be6905052b3d7cdcd8e3cefabb79a26a8dd1b0018b1914551f/diff:/var/lib/d
ocker/overlay2/0e7a0288fc9f7a08b4d153b12e43182978239d48dd3e0f6a570c0bae14247f10/diff:/var/lib/docker/overlay2/1ffb8892e639fde2d59e27476878f99ef6d44ec92a2cb517fe6cc028f6bf18da/diff:/var/lib/docker/overlay2/dcbcc5fb9f4277eb5d5eac20a3f16c7e4b2929ee1cdd4b423771fb46a5d785e0/diff:/var/lib/docker/overlay2/1cd16cc670e378085eb93c13e6834d7263735f8f1999d896d76044f4941b8842/diff:/var/lib/docker/overlay2/3d917e0c4a26ccde625fabe74f8d60018eefd5fb3a70fa669136678ad99044b7/diff:/var/lib/docker/overlay2/5bea8da80513af43768be0ce4d92065d77df41408cb39bed1a25b308da2a805b/diff:/var/lib/docker/overlay2/ebd98219a25924c60a314e406ea93fac96e34f50d7b565d2ac4753d8167070a5/diff:/var/lib/docker/overlay2/d56b112a682d6470cfcee01537dca205b00b3d3eb4a757d4a392b541fc2cce90/diff:/var/lib/docker/overlay2/7826f8248992b4c2dfd9e2b03ddb5743ca2df0de6281c3cea2fcd75ed1c1e9e0/diff:/var/lib/docker/overlay2/12eeb5b41d19ac8b0bedb4493000a75fcf2db98afc7b511db8e5f878a0f2ab63/diff:/var/lib/docker/overlay2/b3120744803006afe3d39cd77a65397c78835da34d22f02983d25752b9f
0c34b/diff:/var/lib/docker/overlay2/2845f1cb55f870d0d6d4d053790459015a538c801a2ddccc73e6134167fbcae8/diff:/var/lib/docker/overlay2/3741bdb7a1192cc59cd93243be0134c2e95d0f2db2e44810917eb3f7f5e98554/diff:/var/lib/docker/overlay2/e64f4c27fa6b5e19206b3e136b71cf5ece4ce5d0318db36570de79cb2d983a9e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c4265b909a2bc2c6086130953e636788b578deff0c5f1dbb27ab58986e290558/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c4265b909a2bc2c6086130953e636788b578deff0c5f1dbb27ab58986e290558/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c4265b909a2bc2c6086130953e636788b578deff0c5f1dbb27ab58986e290558/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-934743",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-934743/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-934743",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-934743",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-934743",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bace57320f176a883bfd1cca4173f950385090df571a3196c58c6d4abc7968d1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34477"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34474"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34476"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34475"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bace57320f17",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-934743": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dbc658c7a727",
	                        "missing-upgrade-934743"
	                    ],
	                    "NetworkID": "56658575e4ecf40fba7073c32c981a83a269cbee47355e6c028e455a8918c43f",
	                    "EndpointID": "6a69369d2ec5b2e5a18f0e25fe03ce95cf2983734797e91ce68158ba917e1f9f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
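The inspect dump above is where the test helpers read the host-side port mappings from: Docker publishes the node container's 22/tcp on 127.0.0.1:34478. A minimal Go sketch of that lookup, using the same inspect template that cli_runner is shown invoking later in this log (illustrative only, not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port Docker mapped to the container's 22/tcp,
// i.e. the value under Ports."22/tcp"[0].HostPort in the JSON above.
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("missing-upgrade-934743")
	fmt.Println(port, err) // "34478" per the Ports block above
}
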
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-934743 -n missing-upgrade-934743
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-934743 -n missing-upgrade-934743: exit status 6 (600.695942ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1128 04:51:27.057647 1369708 status.go:415] kubeconfig endpoint: got: 192.168.59.88:8443, want: 192.168.76.2:8443

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-934743" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-934743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-934743
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-934743: (2.356410641s)
--- FAIL: TestMissingContainerUpgrade (137.25s)
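The root cause recorded above is a stale kubeconfig: status.go compared the endpoint stored in the kubeconfig (192.168.59.88:8443) against the container's actual address (192.168.76.2:8443). A sketch of that comparison, assuming k8s.io/client-go is available; kubeconfigMatches is a hypothetical helper, not the status.go implementation:

package main

import (
	"fmt"
	"net/url"

	"k8s.io/client-go/tools/clientcmd"
)

// kubeconfigMatches reports whether the named context's cluster server
// points at want (host:port).
func kubeconfigMatches(path, context, want string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	ctx, ok := cfg.Contexts[context]
	if !ok {
		return false, fmt.Errorf("context %q not in %s", context, path)
	}
	cluster, ok := cfg.Clusters[ctx.Cluster]
	if !ok {
		return false, fmt.Errorf("cluster %q not in %s", ctx.Cluster, path)
	}
	u, err := url.Parse(cluster.Server)
	if err != nil {
		return false, err
	}
	return u.Host == want, nil
}

func main() {
	ok, err := kubeconfigMatches("/home/jenkins/minikube-integration/17671-1256059/kubeconfig",
		"missing-upgrade-934743", "192.168.76.2:8443")
	fmt.Println(ok, err) // false here: the kubeconfig still held 192.168.59.88:8443
}
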

TestPause/serial/SecondStartNoReconfiguration (64.93s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-143970 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-143970 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.402273001s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-143970] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-143970 in cluster pause-143970
	* Pulling base image ...
	* Updating the running docker "pause-143970" container ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-143970" cluster and "default" namespace by default

-- /stdout --
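The check that failed here is a plain substring assertion on the combined output of the second start. A sketch under that assumption (checkSecondStart is hypothetical; the command line matches the one logged above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const wantMsg = "The running cluster does not require reconfiguration"

// checkSecondStart re-runs `minikube start` on an existing profile and
// requires the no-reconfiguration marker in its combined output.
func checkSecondStart(profile string) error {
	out, err := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio").CombinedOutput()
	if err != nil {
		return fmt.Errorf("second start: %w\n%s", err, out)
	}
	if !strings.Contains(string(out), wantMsg) {
		return fmt.Errorf("expected output to include %q, got:\n%s", wantMsg, out)
	}
	return nil
}

func main() { fmt.Println(checkSecondStart("pause-143970")) }
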
** stderr ** 
	I1128 04:50:46.741407 1366652 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:50:46.741668 1366652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:50:46.741713 1366652 out.go:309] Setting ErrFile to fd 2...
	I1128 04:50:46.741732 1366652 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:50:46.742191 1366652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:50:46.742663 1366652 out.go:303] Setting JSON to false
	I1128 04:50:46.744738 1366652 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27182,"bootTime":1701119865,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:50:46.744848 1366652 start.go:138] virtualization:  
	I1128 04:50:46.747491 1366652 out.go:177] * [pause-143970] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:50:46.750271 1366652 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:50:46.750805 1366652 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1128 04:50:46.753829 1366652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:50:46.755678 1366652 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:50:46.757455 1366652 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:50:46.759131 1366652 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:50:46.760703 1366652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:50:46.750860 1366652 notify.go:220] Checking for updates...
	I1128 04:50:46.762989 1366652 config.go:182] Loaded profile config "pause-143970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:50:46.764170 1366652 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:50:46.807289 1366652 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:50:46.807405 1366652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:50:46.944073 1366652 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1128 04:50:46.961725 1366652 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-28 04:50:46.951716645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:50:46.961838 1366652 docker.go:295] overlay module found
	I1128 04:50:46.963590 1366652 out.go:177] * Using the docker driver based on existing profile
	I1128 04:50:46.965119 1366652 start.go:298] selected driver: docker
	I1128 04:50:46.965139 1366652 start.go:902] validating driver "docker" against &{Name:pause-143970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-143970 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:50:46.965290 1366652 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:50:46.965405 1366652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:50:47.036858 1366652 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-28 04:50:47.026427876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:50:47.037327 1366652 cni.go:84] Creating CNI manager for ""
	I1128 04:50:47.037344 1366652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:50:47.037358 1366652 start_flags.go:323] config:
	{Name:pause-143970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-143970 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:50:47.039637 1366652 out.go:177] * Starting control plane node pause-143970 in cluster pause-143970
	I1128 04:50:47.041505 1366652 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:50:47.043615 1366652 out.go:177] * Pulling base image ...
	I1128 04:50:47.045544 1366652 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:50:47.045599 1366652 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1128 04:50:47.045612 1366652 cache.go:56] Caching tarball of preloaded images
	I1128 04:50:47.045632 1366652 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:50:47.045692 1366652 preload.go:174] Found /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1128 04:50:47.045702 1366652 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:50:47.045835 1366652 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/config.json ...
	I1128 04:50:47.069639 1366652 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1128 04:50:47.069666 1366652 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1128 04:50:47.069690 1366652 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:50:47.069740 1366652 start.go:365] acquiring machines lock for pause-143970: {Name:mkfb88b061b2c8b0aad59981daaebf664f443127 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:50:47.069831 1366652 start.go:369] acquired machines lock for "pause-143970" in 63.154µs
	I1128 04:50:47.069851 1366652 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:50:47.069857 1366652 fix.go:54] fixHost starting: 
	I1128 04:50:47.070154 1366652 cli_runner.go:164] Run: docker container inspect pause-143970 --format={{.State.Status}}
	I1128 04:50:47.088157 1366652 fix.go:102] recreateIfNeeded on pause-143970: state=Running err=<nil>
	W1128 04:50:47.088186 1366652 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:50:47.090603 1366652 out.go:177] * Updating the running docker "pause-143970" container ...
	I1128 04:50:47.092266 1366652 machine.go:88] provisioning docker machine ...
	I1128 04:50:47.092315 1366652 ubuntu.go:169] provisioning hostname "pause-143970"
	I1128 04:50:47.092451 1366652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-143970
	I1128 04:50:47.111063 1366652 main.go:141] libmachine: Using SSH client type: native
	I1128 04:50:47.111765 1366652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1128 04:50:47.111795 1366652 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-143970 && echo "pause-143970" | sudo tee /etc/hostname
	I1128 04:50:47.282170 1366652 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-143970
	
	I1128 04:50:47.282284 1366652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-143970
	I1128 04:50:47.303231 1366652 main.go:141] libmachine: Using SSH client type: native
	I1128 04:50:47.303642 1366652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1128 04:50:47.303660 1366652 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-143970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-143970/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-143970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:50:47.440989 1366652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
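Provisioning commands like the hostname and /etc/hosts edits above run over the container's forwarded SSH port (127.0.0.1:34469 in this run). A sketch assuming golang.org/x/crypto/ssh; runSSH is a hypothetical stand-in for libmachine's SSH client:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the forwarded port with the profile's key and runs one command.
func runSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test rig only
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("127.0.0.1:34469",
		"/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/pause-143970/id_rsa",
		`sudo hostname pause-143970 && echo "pause-143970" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
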
	I1128 04:50:47.441023 1366652 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:50:47.441042 1366652 ubuntu.go:177] setting up certificates
	I1128 04:50:47.441054 1366652 provision.go:83] configureAuth start
	I1128 04:50:47.441121 1366652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-143970
	I1128 04:50:47.460626 1366652 provision.go:138] copyHostCerts
	I1128 04:50:47.460761 1366652 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:50:47.460791 1366652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:50:47.460872 1366652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:50:47.460988 1366652 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:50:47.460999 1366652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:50:47.461027 1366652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:50:47.461092 1366652 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:50:47.461101 1366652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:50:47.461126 1366652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:50:47.461184 1366652 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.pause-143970 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-143970]
	I1128 04:50:48.168746 1366652 provision.go:172] copyRemoteCerts
	I1128 04:50:48.168818 1366652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:50:48.168866 1366652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-143970
	I1128 04:50:48.200166 1366652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/pause-143970/id_rsa Username:docker}
	I1128 04:50:48.300646 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:50:48.350806 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1128 04:50:48.579519 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:50:48.815417 1366652 provision.go:86] duration metric: configureAuth took 1.374349747s
	I1128 04:50:48.815494 1366652 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:50:48.815779 1366652 config.go:182] Loaded profile config "pause-143970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:50:48.815950 1366652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-143970
	I1128 04:50:48.959867 1366652 main.go:141] libmachine: Using SSH client type: native
	I1128 04:50:48.960279 1366652 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34469 <nil> <nil>}
	I1128 04:50:48.960295 1366652 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:50:54.864400 1366652 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:50:54.864442 1366652 machine.go:91] provisioned docker machine in 7.772143329s
	I1128 04:50:54.864452 1366652 start.go:300] post-start starting for "pause-143970" (driver="docker")
	I1128 04:50:54.864463 1366652 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:50:54.864532 1366652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:50:54.864579 1366652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-143970
	I1128 04:50:54.890431 1366652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/pause-143970/id_rsa Username:docker}
	I1128 04:50:54.987772 1366652 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:50:54.992549 1366652 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:50:54.992585 1366652 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:50:54.992596 1366652 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:50:54.992603 1366652 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1128 04:50:54.992613 1366652 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:50:54.992714 1366652 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:50:54.992795 1366652 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:50:54.992902 1366652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:50:55.004208 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:50:55.047813 1366652 start.go:303] post-start completed in 183.318166ms
	I1128 04:50:55.047922 1366652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:50:55.048005 1366652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-143970
	I1128 04:50:55.079612 1366652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/pause-143970/id_rsa Username:docker}
	I1128 04:50:55.199301 1366652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:50:55.208472 1366652 fix.go:56] fixHost completed within 8.138608519s
	I1128 04:50:55.208496 1366652 start.go:83] releasing machines lock for "pause-143970", held for 8.138657299s
	I1128 04:50:55.208577 1366652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-143970
	I1128 04:50:55.234701 1366652 ssh_runner.go:195] Run: cat /version.json
	I1128 04:50:55.234753 1366652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-143970
	I1128 04:50:55.234999 1366652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:50:55.235044 1366652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-143970
	I1128 04:50:55.274740 1366652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/pause-143970/id_rsa Username:docker}
	I1128 04:50:55.289890 1366652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34469 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/pause-143970/id_rsa Username:docker}
	I1128 04:50:55.365547 1366652 ssh_runner.go:195] Run: systemctl --version
	I1128 04:50:55.597187 1366652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:50:55.905712 1366652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:50:55.912937 1366652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:50:55.934248 1366652 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:50:55.934409 1366652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:50:55.953285 1366652 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
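The find/-exec runs above disable conflicting CNI configs by renaming them with a .mk_disabled suffix, so that only the recommended kindnet config stays active. A local-filesystem sketch of the same renaming (illustrative; the real step runs remotely with sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Same effect as `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled`,
	// restricted here to bridge/podman configs.
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, f := range matches {
			if strings.HasSuffix(f, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(f, f+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, "rename:", err)
			}
		}
	}
}
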
	I1128 04:50:55.953312 1366652 start.go:472] detecting cgroup driver to use...
	I1128 04:50:55.953345 1366652 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:50:55.953394 1366652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:50:55.980348 1366652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:50:56.005802 1366652 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:50:56.005900 1366652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:50:56.027454 1366652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:50:56.042897 1366652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:50:56.177536 1366652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:50:56.299590 1366652 docker.go:219] disabling docker service ...
	I1128 04:50:56.299658 1366652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:50:56.316111 1366652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:50:56.331813 1366652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:50:56.459365 1366652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:50:56.588466 1366652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:50:56.603836 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:50:56.723066 1366652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:50:56.723137 1366652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:50:56.777550 1366652 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:50:56.777619 1366652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:50:56.828178 1366652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:50:56.858037 1366652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:50:56.889058 1366652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:50:56.934421 1366652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:50:56.958629 1366652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:50:57.000402 1366652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:50:57.287678 1366652 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:51:07.526380 1366652 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.238670824s)
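The sed invocations above rewrite two keys in CRI-O's drop-in config, pause_image and cgroup_manager, before the comparatively slow (~10.2s here) crio restart. The same edit as a local-file Go sketch, with paths and values taken from the log and error handling kept minimal:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(b)
	// pin the pause image, as in the first sed above
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	// match the cgroupfs driver detected on the host, as in the second sed
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
	// followed by: sudo systemctl daemon-reload && sudo systemctl restart crio
}
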
	I1128 04:51:07.526414 1366652 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:51:07.526465 1366652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:51:07.531407 1366652 start.go:540] Will wait 60s for crictl version
	I1128 04:51:07.531474 1366652 ssh_runner.go:195] Run: which crictl
	I1128 04:51:07.536361 1366652 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:51:07.582148 1366652 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1128 04:51:07.582250 1366652 ssh_runner.go:195] Run: crio --version
	I1128 04:51:07.680318 1366652 ssh_runner.go:195] Run: crio --version
	I1128 04:51:07.763817 1366652 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.6 ...
	I1128 04:51:07.765704 1366652 cli_runner.go:164] Run: docker network inspect pause-143970 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:51:07.794899 1366652 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1128 04:51:07.802184 1366652 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:51:07.802248 1366652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:51:07.865888 1366652 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:51:07.865912 1366652 crio.go:415] Images already preloaded, skipping extraction
	I1128 04:51:07.865968 1366652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:51:07.909808 1366652 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:51:07.909832 1366652 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:51:07.909907 1366652 ssh_runner.go:195] Run: crio config
	I1128 04:51:07.974836 1366652 cni.go:84] Creating CNI manager for ""
	I1128 04:51:07.974864 1366652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:51:07.974904 1366652 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:51:07.974931 1366652 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-143970 NodeName:pause-143970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:51:07.975115 1366652 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-143970"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:51:07.975205 1366652 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-143970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-143970 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
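Configs like the kubeadm YAML and kubelet unit above are rendered from templates with per-node values (IP, port, node name) substituted in. A toy sketch of that kind of rendering; the template fragment and struct fields are illustrative, not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// initCfg is an illustrative fragment modeled on the InitConfiguration above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	data := struct {
		NodeIP   string
		NodeName string
		Port     int
	}{NodeIP: "192.168.67.2", NodeName: "pause-143970", Port: 8443}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
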
	I1128 04:51:07.975277 1366652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:51:07.986875 1366652 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:51:07.986949 1366652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:51:07.997795 1366652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1128 04:51:08.023905 1366652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:51:08.047495 1366652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1128 04:51:08.071809 1366652 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1128 04:51:08.077077 1366652 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970 for IP: 192.168.67.2
	I1128 04:51:08.077116 1366652 certs.go:190] acquiring lock for shared ca certs: {Name:mka7cf71bac87c390cad9bb03b67c849db7103ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:08.077305 1366652 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key
	I1128 04:51:08.077390 1366652 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key
	I1128 04:51:08.077592 1366652 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.key
	I1128 04:51:08.078048 1366652 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/apiserver.key.c7fa3a9e
	I1128 04:51:08.078495 1366652 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/proxy-client.key
	I1128 04:51:08.078662 1366652 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem (1338 bytes)
	W1128 04:51:08.078697 1366652 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415_empty.pem, impossibly tiny 0 bytes
	I1128 04:51:08.078712 1366652 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:51:08.078742 1366652 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem (1082 bytes)
	I1128 04:51:08.078775 1366652 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:51:08.078804 1366652 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem (1679 bytes)
	I1128 04:51:08.078854 1366652 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:51:08.079583 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:51:08.110888 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:51:08.140838 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:51:08.170993 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 04:51:08.200806 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:51:08.230494 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:51:08.260065 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:51:08.290093 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1128 04:51:08.320394 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /usr/share/ca-certificates/12614152.pem (1708 bytes)
	I1128 04:51:08.350241 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:51:08.380213 1366652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/1261415.pem --> /usr/share/ca-certificates/1261415.pem (1338 bytes)
	I1128 04:51:08.409138 1366652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:51:08.430864 1366652 ssh_runner.go:195] Run: openssl version
	I1128 04:51:08.439559 1366652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12614152.pem && ln -fs /usr/share/ca-certificates/12614152.pem /etc/ssl/certs/12614152.pem"
	I1128 04:51:08.452708 1366652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12614152.pem
	I1128 04:51:08.458169 1366652 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 04:21 /usr/share/ca-certificates/12614152.pem
	I1128 04:51:08.458244 1366652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12614152.pem
	I1128 04:51:08.467448 1366652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12614152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:51:08.480159 1366652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:51:08.491792 1366652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:51:08.496713 1366652 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 04:13 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:51:08.496797 1366652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:51:08.505802 1366652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:51:08.516994 1366652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1261415.pem && ln -fs /usr/share/ca-certificates/1261415.pem /etc/ssl/certs/1261415.pem"
	I1128 04:51:08.529263 1366652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1261415.pem
	I1128 04:51:08.533904 1366652 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 04:21 /usr/share/ca-certificates/1261415.pem
	I1128 04:51:08.533968 1366652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1261415.pem
	I1128 04:51:08.542690 1366652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1261415.pem /etc/ssl/certs/51391683.0"
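The openssl x509 -hash calls above compute the subject-hash filenames (3ec20f2e.0, b5213941.0, 51391683.0) used for the /etc/ssl/certs symlinks. A sketch of the hash-and-link step, assuming the openssl binary is on PATH and skipping the sudo indirection:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash mirrors `openssl x509 -hash -noout` plus the ln -fs above:
// link /etc/ssl/certs/<subject-hash>.0 at the installed PEM.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	// e.g. produces /etc/ssl/certs/3ec20f2e.0 for 12614152.pem per the log above
	fmt.Println(linkByHash("/usr/share/ca-certificates/12614152.pem"))
}
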
	I1128 04:51:08.554080 1366652 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:51:08.558908 1366652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:51:08.567566 1366652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:51:08.576250 1366652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:51:08.584877 1366652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:51:08.594800 1366652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:51:08.603966 1366652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
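Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 24 hours; a non-zero exit is the signal to regenerate it. The equivalent check in pure Go with crypto/x509 (the path is one of the certs checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the cert at path will have expired d from
// now, i.e. when `openssl x509 -checkend` would exit non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(b)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour))
}
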
	I1128 04:51:08.612744 1366652 kubeadm.go:404] StartCluster: {Name:pause-143970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-143970 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-al
iases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
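The StartCluster entry above is a single log line: the cluster's config struct printed with fmt's %+v verb, which is why the driver settings, the addons map, and the mount options all appear inline. A cut-down illustration of how such a dump is produced (abridged stand-in types, not the real definitions):

package main

import "fmt"

// Abridged stand-ins for minikube's config types; the real struct has
// many more fields, as the log line shows.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory, CPUs     int
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name:   "pause-143970",
		Driver: "docker",
		Memory: 2048, CPUs: 2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.28.4",
			ClusterName:       "pause-143970",
			ContainerRuntime:  "crio",
		},
	}
	// %+v prints field names alongside values, producing the single long
	// "StartCluster: {...}" line seen in the log.
	fmt.Printf("StartCluster: %+v\n", cfg)
}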
	I1128 04:51:08.612874 1366652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:51:08.612935 1366652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:51:08.658800 1366652 cri.go:89] found id: "194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708"
	I1128 04:51:08.658833 1366652 cri.go:89] found id: "b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868"
	I1128 04:51:08.658841 1366652 cri.go:89] found id: "b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f"
	I1128 04:51:08.658847 1366652 cri.go:89] found id: "74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee"
	I1128 04:51:08.658851 1366652 cri.go:89] found id: "8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0"
	I1128 04:51:08.658855 1366652 cri.go:89] found id: "323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20"
	I1128 04:51:08.658860 1366652 cri.go:89] found id: "16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e"
	I1128 04:51:08.658864 1366652 cri.go:89] found id: ""
	I1128 04:51:08.658920 1366652 ssh_runner.go:195] Run: sudo runc list -f json
	I1128 04:51:08.700140 1366652 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e/userdata","rootfs":"/var/lib/containers/storage/overlay/605fbf0d835813cb30bf18751521ee2566517ef09a980b0c3b70b0da9e682712/merged","created":"2023-11-28T04:50:56.906211563Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b90b411b","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-
o.Annotations":"{\"io.kubernetes.container.hash\":\"b90b411b\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-28T04:50:56.62321051Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kuber
netes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-mxxmx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-mxxmx_dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/605fbf0d835813cb30bf18751521ee2566517ef09a980b0c3b70b0da9e682712/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-mxxmx_kube-system_dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f7b41abe2e8d557d9cf90ace79dce20ab688613318ff481a2cd638503ca0644b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f7b41abe2e8d557d9cf90ace7
9dce20ab688613318ff481a2cd638503ca0644b","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-mxxmx_kube-system_dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5/containers/coredns/05cc48be\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/servi
ceaccount\",\"host_path\":\"/var/lib/kubelet/pods/dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5/volumes/kubernetes.io~projected/kube-api-access-zvdkj\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-mxxmx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5","kubernetes.io/config.seen":"2023-11-28T04:50:43.031718531Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708/userdata","rootfs":"/var/lib/containers/storage/overlay/515b3f013a01831c353eb8da7310d372bab0f1cccea087e37d5ac4ec3931a215/merged","created":"2023-11-28T04:50:56.918808487Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e1
639c7a","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e1639c7a\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-28T04:50:56.777556698Z","io.kubernetes.cri-o.Image":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.4","io.kubernetes.cri-o.ImageRef":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c
54","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-143970\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"71e3ebb7196c2fd44a249ae1678cdac0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-143970_71e3ebb7196c2fd44a249ae1678cdac0/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/515b3f013a01831c353eb8da7310d372bab0f1cccea087e37d5ac4ec3931a215/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-143970_kube-system_71e3ebb7196c2fd44a249ae1678cdac0_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7ded19db045a158d22eda7a728d301250bda2297d1a432e0d95045e2b2bd14f8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7ded19db045a158d22eda7a728d301250bda2297d1a432e0d95045e2b2bd14f8","io.kubernetes.cri-
o.SandboxName":"k8s_kube-scheduler-pause-143970_kube-system_71e3ebb7196c2fd44a249ae1678cdac0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/71e3ebb7196c2fd44a249ae1678cdac0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/71e3ebb7196c2fd44a249ae1678cdac0/containers/kube-scheduler/812c60e5\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-143970","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"71
e3ebb7196c2fd44a249ae1678cdac0","kubernetes.io/config.hash":"71e3ebb7196c2fd44a249ae1678cdac0","kubernetes.io/config.seen":"2023-11-28T04:49:44.103453163Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20/userdata","rootfs":"/var/lib/containers/storage/overlay/e7a6e4c16af7512f1a8c1c521509476358a49f475f4827ea31c0042c38a04367/merged","created":"2023-11-28T04:50:56.904932111Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c671fe91","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c671fe91\",\"io.kubernetes.co
ntainer.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-28T04:50:56.632557933Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-143970\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ec410fbd310e70e80aa386749c4354c1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-143970_ec410fbd310e70e80aa386749
c4354c1/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e7a6e4c16af7512f1a8c1c521509476358a49f475f4827ea31c0042c38a04367/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-143970_kube-system_ec410fbd310e70e80aa386749c4354c1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ec39115b318ddb0a5896617daccd0ac773718808801a473ad3ab0e96d9875fd8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ec39115b318ddb0a5896617daccd0ac773718808801a473ad3ab0e96d9875fd8","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-143970_kube-system_ec410fbd310e70e80aa386749c4354c1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ec410fbd310e70e80aa386749c4354c1/etc-hosts\",\"readonly\":false,
\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ec410fbd310e70e80aa386749c4354c1/containers/etcd/44ab86e7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-143970","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ec410fbd310e70e80aa386749c4354c1","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"ec410fbd310e70e80aa386749c4354c1","kubernetes.io/config.seen":"2023-11-28T04:49:44.103454607Z","kubernetes.io/config.source":"file"},"owner":"root"},{
"ociVersion":"1.0.2-dev","id":"74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee/userdata","rootfs":"/var/lib/containers/storage/overlay/6cf36f804c650d81c437fb8d0ae600db6e5efb5845ed2d355f9dbd5dbf222311/merged","created":"2023-11-28T04:50:56.920508901Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7c1c6ab0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7c1c6ab0\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGra
cePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-28T04:50:56.690311516Z","io.kubernetes.cri-o.Image":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.4","io.kubernetes.cri-o.ImageRef":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-143970\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"59f0a95d7161574212b71d16855f8aba\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-143970_59f0a95d7161574212b71d16855f8aba/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/
storage/overlay/6cf36f804c650d81c437fb8d0ae600db6e5efb5845ed2d355f9dbd5dbf222311/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-143970_kube-system_59f0a95d7161574212b71d16855f8aba_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/fbb3db5147c0906820605aacc381d292dcb26558af276ca0822f9a361be9d74b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"fbb3db5147c0906820605aacc381d292dcb26558af276ca0822f9a361be9d74b","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-143970_kube-system_59f0a95d7161574212b71d16855f8aba_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/59f0a95d7161574212b71d16855f8aba/containers/kube-apiserver/9f4f60b9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-c
ertificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/59f0a95d7161574212b71d16855f8aba/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-143970","io.kubernetes.pod.namespace":"kube-system","io.kub
ernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"59f0a95d7161574212b71d16855f8aba","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"59f0a95d7161574212b71d16855f8aba","kubernetes.io/config.seen":"2023-11-28T04:49:44.103455829Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0/userdata","rootfs":"/var/lib/containers/storage/overlay/7f2c073b09967d3019bd690a4347e21b9af955b1fa9cf1020194e55a9d4eab01/merged","created":"2023-11-28T04:50:56.90599444Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a289543","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log"
,"io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a289543\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-28T04:50:56.6624701Z","io.kubernetes.cri-o.Image":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.4","io.kubernetes.cri-o.ImageRef":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-l29m5\",\"io.kubernetes.pod.namespace\":\"kube-system
\",\"io.kubernetes.pod.uid\":\"864ee813-0e93-434e-8930-250e69f33cfe\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-l29m5_864ee813-0e93-434e-8930-250e69f33cfe/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7f2c073b09967d3019bd690a4347e21b9af955b1fa9cf1020194e55a9d4eab01/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-l29m5_kube-system_864ee813-0e93-434e-8930-250e69f33cfe_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7889be2eb2c8aa6d63702441b59ff5e7c20bfa178ce7c0bac9eab1235acbbe6a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7889be2eb2c8aa6d63702441b59ff5e7c20bfa178ce7c0bac9eab1235acbbe6a","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-l29m5_kube-system_864ee813-0e93-434e-8930-250e69f33cfe_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.
kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/864ee813-0e93-434e-8930-250e69f33cfe/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/864ee813-0e93-434e-8930-250e69f33cfe/containers/kube-proxy/fca7a2b4\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/864ee813-0e93-434e-8930-250e69f33cfe/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/servic
eaccount\",\"host_path\":\"/var/lib/kubelet/pods/864ee813-0e93-434e-8930-250e69f33cfe/volumes/kubernetes.io~projected/kube-api-access-znw2g\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-l29m5","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"864ee813-0e93-434e-8930-250e69f33cfe","kubernetes.io/config.seen":"2023-11-28T04:50:10.097191950Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f/userdata","rootfs":"/var/lib/containers/storage/overlay/dee6ff09c1178fedd258d9d3d94e567df46a16ee018e708ae9022dbb1cd8aa4a/merged","created":"2023-11-28T04:50:56.913296351Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b60ddd3e","
io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b60ddd3e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-28T04:50:56.714289198Z","io.kubernetes.cri-o.Image":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.4","io.kubernetes.cri-o.ImageRef":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd
3d209ca994b","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-143970\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"625e6f1fc84c5319830e6fb8edf69496\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-143970_625e6f1fc84c5319830e6fb8edf69496/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/dee6ff09c1178fedd258d9d3d94e567df46a16ee018e708ae9022dbb1cd8aa4a/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-143970_kube-system_625e6f1fc84c5319830e6fb8edf69496_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c277ccb679063c760a3a4abb72fb68144736e7d545a6e6bb058d65b00036f923/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c277ccb679063
c760a3a4abb72fb68144736e7d545a6e6bb058d65b00036f923","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-143970_kube-system_625e6f1fc84c5319830e6fb8edf69496_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/625e6f1fc84c5319830e6fb8edf69496/containers/kube-controller-manager/a39536ff\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/625e6f1fc84c5319830e6fb8edf69496/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":
0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-143970","io.kubernetes.pod.namespace":"kube-system","io.kub
ernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"625e6f1fc84c5319830e6fb8edf69496","kubernetes.io/config.hash":"625e6f1fc84c5319830e6fb8edf69496","kubernetes.io/config.seen":"2023-11-28T04:49:44.103447009Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868/userdata","rootfs":"/var/lib/containers/storage/overlay/958c51ab0cc6f080b2e125284bc4ef74765f7e348d1b9753b100e937768f802b/merged","created":"2023-11-28T04:50:56.9570733Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2fbd0c71","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annota
tions":"{\"io.kubernetes.container.hash\":\"2fbd0c71\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-28T04:50:56.73444519Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-nxh4c\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f4689f10-5d67-46e1-85cb-7aadda9b847b\"}","io.
kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-nxh4c_f4689f10-5d67-46e1-85cb-7aadda9b847b/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/958c51ab0cc6f080b2e125284bc4ef74765f7e348d1b9753b100e937768f802b/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-nxh4c_kube-system_f4689f10-5d67-46e1-85cb-7aadda9b847b_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/643404fca0a2c7364e34b414e70c3b01b8be086d6b2664b64903c920c9d1c831/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"643404fca0a2c7364e34b414e70c3b01b8be086d6b2664b64903c920c9d1c831","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-nxh4c_kube-system_f4689f10-5d67-46e1-85cb-7aadda9b847b_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":
\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f4689f10-5d67-46e1-85cb-7aadda9b847b/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f4689f10-5d67-46e1-85cb-7aadda9b847b/containers/kindnet-cni/35376fa6\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f4689f10-5d67-46e1-85cb-7aadda9b847b/volumes/kubernetes.io~projected/kube-api-access-lvn82\",\"readonly\":true,\"propagatio
n\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-nxh4c","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f4689f10-5d67-46e1-85cb-7aadda9b847b","kubernetes.io/config.seen":"2023-11-28T04:50:10.149612055Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1128 04:51:08.700841 1366652 cri.go:126] list returned 7 containers
	I1128 04:51:08.700857 1366652 cri.go:129] container: {ID:16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e Status:stopped}
	I1128 04:51:08.700880 1366652 cri.go:135] skipping {16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e stopped}: state = "stopped", want "paused"
	I1128 04:51:08.700901 1366652 cri.go:129] container: {ID:194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708 Status:stopped}
	I1128 04:51:08.700909 1366652 cri.go:135] skipping {194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708 stopped}: state = "stopped", want "paused"
	I1128 04:51:08.700915 1366652 cri.go:129] container: {ID:323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20 Status:stopped}
	I1128 04:51:08.700928 1366652 cri.go:135] skipping {323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20 stopped}: state = "stopped", want "paused"
	I1128 04:51:08.700938 1366652 cri.go:129] container: {ID:74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee Status:stopped}
	I1128 04:51:08.700945 1366652 cri.go:135] skipping {74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee stopped}: state = "stopped", want "paused"
	I1128 04:51:08.700955 1366652 cri.go:129] container: {ID:8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0 Status:stopped}
	I1128 04:51:08.700962 1366652 cri.go:135] skipping {8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0 stopped}: state = "stopped", want "paused"
	I1128 04:51:08.700968 1366652 cri.go:129] container: {ID:b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f Status:stopped}
	I1128 04:51:08.700981 1366652 cri.go:135] skipping {b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f stopped}: state = "stopped", want "paused"
	I1128 04:51:08.700992 1366652 cri.go:129] container: {ID:b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868 Status:stopped}
	I1128 04:51:08.700998 1366652 cri.go:135] skipping {b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868 stopped}: state = "stopped", want "paused"
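`sudo runc list -f json` returns an array of OCI container states, and because this caller asked for {State:paused}, every kube-system container whose status is not "paused" gets a skipping line — all seven here, since the node was just restarted and everything is stopped. A sketch of that decode-and-filter, assuming only the fields actually used (id, status, and the pod-namespace annotation):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcState holds the few fields of `runc list -f json` output the
// filter needs; the real objects also carry bundle paths, rootfs
// locations, and the full annotation map shown in the log.
type runcState struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

func listByState(want, namespace string) ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, err
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	var ids []string
	for _, s := range states {
		if s.Annotations["io.kubernetes.pod.namespace"] != namespace {
			continue
		}
		if s.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", s.ID, s.Status, s.Status, want)
			continue
		}
		ids = append(ids, s.ID)
	}
	return ids, nil
}

func main() {
	ids, err := listByState("paused", "kube-system")
	fmt.Println(ids, err)
}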
	I1128 04:51:08.701070 1366652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:51:08.712106 1366652 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:51:08.712130 1366652 kubeadm.go:636] restartCluster start
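The `sudo ls` over kubeadm-flags.env, config.yaml, and the etcd data dir is the restart-versus-init decision: all three present means this node was provisioned before, so a cluster restart is attempted instead of a fresh `kubeadm init`. A sketch of that existence check, done with local stat calls rather than ls over SSH:

package main

import (
	"fmt"
	"os"
)

// haveExistingCluster mirrors the decision behind "found existing
// configuration files, will attempt cluster restart": every marker
// path must exist for a restart to be attempted.
func haveExistingCluster() bool {
	markers := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	for _, p := range markers {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	if haveExistingCluster() {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no prior cluster state, running kubeadm init from scratch")
	}
}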
	I1128 04:51:08.712184 1366652 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:51:08.722672 1366652 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:08.723346 1366652 kubeconfig.go:92] found "pause-143970" server: "https://192.168.67.2:8443"
	I1128 04:51:08.724379 1366652 kapi.go:59] client config for pause-143970: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
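The kapi.go line dumps the client-go rest.Config in use: mutual TLS to https://192.168.67.2:8443 with the profile's client certificate/key and the cluster CA. A sketch of building an equivalent clientset from those paths (assuming k8s.io/client-go is available; the List call at the end is only a smoke test):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/home/jenkins/minikube-integration/17671-1256059/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.67.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/profiles/pause-143970/client.crt",
			KeyFile:  profile + "/profiles/pause-143970/client.key",
			CAFile:   profile + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Smoke test: list kube-system pods the way the waiters below do.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}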
	I1128 04:51:08.725370 1366652 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:51:08.737927 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:08.738034 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:08.750185 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:08.750206 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:08.750262 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:08.762502 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:09.263228 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:09.263346 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:09.275585 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:09.763284 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:09.763403 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:09.775217 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:10.262663 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:10.262764 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:10.277752 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:10.763494 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:10.763582 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:10.776300 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:11.262734 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:11.262867 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:11.276257 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:11.763075 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:11.763209 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:11.775866 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:12.263518 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:12.263654 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:12.276634 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:12.762678 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:12.762788 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:12.775599 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:13.263306 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:13.263405 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:13.275056 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:13.762649 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:13.762735 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:13.774934 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:14.262599 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:14.262688 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:14.275104 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:14.762644 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:14.762742 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:14.774914 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:15.263617 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:15.263717 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:15.276675 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:15.763102 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:15.763186 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:15.776638 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:16.263193 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:16.263264 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:16.278171 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:16.762990 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:16.763072 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:16.775631 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:17.263263 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:17.263337 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:17.277757 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:17.763416 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:17.763551 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:17.777153 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:18.262753 1366652 api_server.go:166] Checking apiserver status ...
	I1128 04:51:18.262856 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:51:18.274699 1366652 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:18.738481 1366652 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
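Each "Checking apiserver status" pair above is one poll: run `pgrep -xnf kube-apiserver.*minikube.*`, treat exit status 1 (no match) as not-up-yet, wait about half a second, and retry until the caller's context deadline fires — the "context deadline exceeded" just recorded. A sketch of such a poll loop:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or
// ctx expires. pgrep exits 1 when nothing matches, so a non-nil error
// from Run just means "keep waiting".
func waitForProcess(ctx context.Context, pattern string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil // apiserver process found
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded"
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(waitForProcess(ctx, "kube-apiserver.*minikube.*"))
}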
	I1128 04:51:18.738541 1366652 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:51:18.738554 1366652 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:51:18.738648 1366652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:51:18.781555 1366652 cri.go:89] found id: "194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708"
	I1128 04:51:18.781582 1366652 cri.go:89] found id: "b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868"
	I1128 04:51:18.781587 1366652 cri.go:89] found id: "b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f"
	I1128 04:51:18.781592 1366652 cri.go:89] found id: "74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee"
	I1128 04:51:18.781596 1366652 cri.go:89] found id: "8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0"
	I1128 04:51:18.781601 1366652 cri.go:89] found id: "323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20"
	I1128 04:51:18.781605 1366652 cri.go:89] found id: "16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e"
	I1128 04:51:18.781609 1366652 cri.go:89] found id: ""
	I1128 04:51:18.781615 1366652 cri.go:234] Stopping containers: [194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708 b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868 b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f 74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee 8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0 323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20 16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e]
	I1128 04:51:18.781674 1366652 ssh_runner.go:195] Run: which crictl
	I1128 04:51:18.786241 1366652 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708 b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868 b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f 74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee 8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0 323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20 16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e
	I1128 04:51:18.863142 1366652 ssh_runner.go:195] Run: sudo systemctl stop kubelet
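With a reconfigure decided, every kube-system container is stopped through crictl (10-second grace each) and then the kubelet itself is stopped so nothing respawns them mid-reconfiguration. A sketch of that teardown, assuming crictl and systemd on the node (IDs truncated to two for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// stopKubeSystem stops the given container IDs via crictl, then the
// kubelet itself, mirroring the two Run: lines above.
func stopKubeSystem(ids []string) error {
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("crictl stop: %v: %s", err, out)
	}
	return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}

func main() {
	err := stopKubeSystem([]string{
		"194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708",
		"b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868",
	})
	fmt.Println(err)
}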
	I1128 04:51:18.967698 1366652 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:51:18.978696 1366652 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov 28 04:49 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov 28 04:49 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Nov 28 04:49 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov 28 04:49 /etc/kubernetes/scheduler.conf
	
	I1128 04:51:18.978766 1366652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1128 04:51:18.989573 1366652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1128 04:51:19.000212 1366652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1128 04:51:19.012084 1366652 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:19.012161 1366652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1128 04:51:19.023042 1366652 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1128 04:51:19.033821 1366652 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:51:19.033885 1366652 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
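The grep/rm pairs above audit each kubeconfig under /etc/kubernetes: any file that does not mention the expected control-plane endpoint is removed so the upcoming kubeconfig phase regenerates it — here controller-manager.conf and scheduler.conf failed the grep. A sketch of that keep-or-remove rule:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleKubeconfig deletes conf unless it references endpoint,
// mirroring the "may not be in ... - will remove" log lines.
func pruneStaleKubeconfig(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err != nil {
		return err
	}
	if bytes.Contains(data, []byte(endpoint)) {
		return nil // endpoint present, keep the file
	}
	return os.Remove(conf)
}

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		fmt.Println(conf, pruneStaleKubeconfig(conf, "https://control-plane.minikube.internal:8443"))
	}
}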
	I1128 04:51:19.044589 1366652 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:51:19.056939 1366652 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:51:19.056965 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:51:19.165596 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:51:20.282783 1366652 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.117153629s)
	I1128 04:51:20.282812 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:51:20.499377 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:51:20.599207 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
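The restart then replays selected `kubeadm init` phases in order — certs, kubeconfig, kubelet-start, control-plane, etcd — against the freshly copied kubeadm.yaml rather than running a full init. A sketch of driving that sequence (the log's PATH override to /var/lib/minikube/binaries/v1.28.4 is omitted here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", phase, err, out)
			return
		}
	}
	fmt.Println("control plane reconfigured")
}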
	I1128 04:51:20.699402 1366652 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:51:20.699475 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:51:20.740944 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:51:21.268645 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:51:21.768479 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:51:22.268913 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:51:22.293262 1366652 api_server.go:72] duration metric: took 1.593858945s to wait for apiserver process to appear ...
	I1128 04:51:22.293291 1366652 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:51:22.293308 1366652 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1128 04:51:27.294617 1366652 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1128 04:51:27.294652 1366652 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1128 04:51:27.657541 1366652 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:51:27.657583 1366652 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:51:28.157816 1366652 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1128 04:51:28.168784 1366652 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:51:28.168819 1366652 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:51:28.657716 1366652 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1128 04:51:28.668038 1366652 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1128 04:51:28.688532 1366652 api_server.go:141] control plane version: v1.28.4
	I1128 04:51:28.688568 1366652 api_server.go:131] duration metric: took 6.39526955s to wait for apiserver health ...
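The retry loop above (api_server.go) probes /healthz roughly every 500ms, treating 500 responses from the not-yet-finished post-start hooks (rbac/bootstrap-roles, bootstrap-controller, apiservice-registration-controller) as "not ready", until the endpoint finally returns 200. A minimal sketch of that pattern, assuming a self-signed apiserver certificate (hence the insecure TLS config; minikube itself verifies against the cluster CA) and illustrative endpoint/timeout values:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
// mirroring the "Checking apiserver healthz ... returned 500/200" loop above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: self-signed cert; minikube actually trusts the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}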
	I1128 04:51:28.688578 1366652 cni.go:84] Creating CNI manager for ""
	I1128 04:51:28.688584 1366652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:51:28.691677 1366652 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1128 04:51:28.696646 1366652 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 04:51:28.704970 1366652 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 04:51:28.704991 1366652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 04:51:28.755634 1366652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 04:51:30.131138 1366652 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.37546283s)
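The CNI step is an ordinary kubectl apply executed on the node via ssh_runner, with the elapsed time logged as a duration metric. A sketch of the same invocation driven locally from Go; the binary and manifest paths are copied from the log and only stand in for wherever they live on a real node:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// The same command the log shows ssh_runner executing (paths from the log).
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v\n%s", err, out)
		return
	}
	// The log records the equivalent duration metric (1.375s in this run).
	fmt.Printf("applied CNI manifest in %s\n", time.Since(start))
}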
	I1128 04:51:30.131172 1366652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:51:30.145621 1366652 system_pods.go:59] 7 kube-system pods found
	I1128 04:51:30.145670 1366652 system_pods.go:61] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:51:30.145703 1366652 system_pods.go:61] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:51:30.145718 1366652 system_pods.go:61] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:30.145731 1366652 system_pods.go:61] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:51:30.145746 1366652 system_pods.go:61] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:51:30.145753 1366652 system_pods.go:61] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:30.145766 1366652 system_pods.go:61] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:51:30.145798 1366652 system_pods.go:74] duration metric: took 14.616163ms to wait for pod list to return data ...
	I1128 04:51:30.145828 1366652 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:51:30.150412 1366652 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:51:30.150456 1366652 node_conditions.go:123] node cpu capacity is 2
	I1128 04:51:30.150470 1366652 node_conditions.go:105] duration metric: took 4.6182ms to run NodePressure ...
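The NodePressure verification reads the node's capacity (203034800Ki of ephemeral storage and 2 CPUs here) from its status. A hedged client-go sketch of the same lookup; the kubeconfig path is a placeholder and error handling is reduced to panics:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-143970", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Println("cpu capacity:", cpu.String())                   // "2" in the log
	fmt.Println("ephemeral-storage capacity:", storage.String()) // "203034800Ki"
	for _, c := range node.Status.Conditions {
		// MemoryPressure/DiskPressure/PIDPressure should all be False on a healthy node.
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}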
	I1128 04:51:30.150495 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:51:30.463619 1366652 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:51:30.470959 1366652 kubeadm.go:787] kubelet initialised
	I1128 04:51:30.471006 1366652 kubeadm.go:788] duration metric: took 7.359224ms waiting for restarted kubelet to initialise ...
	I1128 04:51:30.471015 1366652 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:30.479093 1366652 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:32.505917 1366652 pod_ready.go:92] pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:32.505945 1366652 pod_ready.go:81] duration metric: took 2.026818124s waiting for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:32.505958 1366652 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.527276 1366652 pod_ready.go:92] pod "etcd-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.527318 1366652 pod_ready.go:81] duration metric: took 1.021347081s waiting for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.527333 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.534523 1366652 pod_ready.go:92] pod "kube-apiserver-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.534552 1366652 pod_ready.go:81] duration metric: took 7.211105ms waiting for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.534564 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.735118 1366652 pod_ready.go:92] pod "kube-controller-manager-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.735146 1366652 pod_ready.go:81] duration metric: took 200.564503ms waiting for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.735159 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:34.134677 1366652 pod_ready.go:92] pod "kube-proxy-l29m5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:34.134702 1366652 pod_ready.go:81] duration metric: took 399.53662ms waiting for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:34.134714 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:36.443255 1366652 pod_ready.go:102] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"False"
	I1128 04:51:38.444024 1366652 pod_ready.go:102] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"False"
	I1128 04:51:40.444791 1366652 pod_ready.go:92] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.444814 1366652 pod_ready.go:81] duration metric: took 6.310092376s waiting for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.444823 1366652 pod_ready.go:38] duration metric: took 9.973766517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
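Each pod_ready wait above is a poll on the pod's Ready condition until it flips to True or the 4m0s budget runs out. A minimal client-go version of that predicate and loop; the pod name and namespace come from the log, and the kubeconfig path is again a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True -- the same
// predicate behind the "Ready":"True" lines above.
func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the log waits "up to 4m0s"
	for time.Now().Before(deadline) {
		ready, err := isPodReady(cs, "kube-system", "kube-scheduler-pause-143970")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}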
	I1128 04:51:40.444839 1366652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:51:40.456537 1366652 ops.go:34] apiserver oom_adj: -16
	I1128 04:51:40.456614 1366652 kubeadm.go:640] restartCluster took 31.744473113s
	I1128 04:51:40.456636 1366652 kubeadm.go:406] StartCluster complete in 31.843899524s
	I1128 04:51:40.456713 1366652 settings.go:142] acquiring lock: {Name:mk51bec1305a61d1e5f21881e1d4b01dfafff7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:40.456832 1366652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:51:40.457684 1366652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/kubeconfig: {Name:mkdd24900acdf0a7a11c60f4e6d81c9963f4153d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:40.458023 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:51:40.458411 1366652 config.go:182] Loaded profile config "pause-143970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:51:40.458720 1366652 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:51:40.462401 1366652 out.go:177] * Enabled addons: 
	I1128 04:51:40.459934 1366652 kapi.go:59] client config for pause-143970: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:51:40.464696 1366652 addons.go:502] enable addons completed in 5.993972ms: enabled=[]
	I1128 04:51:40.469355 1366652 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-143970" context rescaled to 1 replicas
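Rescaling the coredns deployment to a single replica (kapi.go:248) is a scale-subresource update. A sketch using client-go's GetScale/UpdateScale on the kube-system namespace, under the same placeholder-kubeconfig assumption as the sketches above:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1 // "rescaled to 1 replicas" in the log
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}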
	I1128 04:51:40.469395 1366652 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:51:40.471386 1366652 out.go:177] * Verifying Kubernetes components...
	I1128 04:51:40.473192 1366652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:51:40.658375 1366652 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 04:51:40.658429 1366652 node_ready.go:35] waiting up to 6m0s for node "pause-143970" to be "Ready" ...
	I1128 04:51:40.662633 1366652 node_ready.go:49] node "pause-143970" has status "Ready":"True"
	I1128 04:51:40.662654 1366652 node_ready.go:38] duration metric: took 4.212729ms waiting for node "pause-143970" to be "Ready" ...
	I1128 04:51:40.662666 1366652 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:40.671706 1366652 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.680813 1366652 pod_ready.go:92] pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.680885 1366652 pod_ready.go:81] duration metric: took 9.102378ms waiting for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.680918 1366652 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.712981 1366652 pod_ready.go:92] pod "etcd-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.713005 1366652 pod_ready.go:81] duration metric: took 32.066002ms waiting for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.713022 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.934601 1366652 pod_ready.go:92] pod "kube-apiserver-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.934622 1366652 pod_ready.go:81] duration metric: took 221.592391ms waiting for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.934634 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.335746 1366652 pod_ready.go:92] pod "kube-controller-manager-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:41.335777 1366652 pod_ready.go:81] duration metric: took 401.135898ms waiting for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.335789 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.735161 1366652 pod_ready.go:92] pod "kube-proxy-l29m5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:41.735190 1366652 pod_ready.go:81] duration metric: took 399.387969ms waiting for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.735203 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:42.140039 1366652 pod_ready.go:92] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:42.140526 1366652 pod_ready.go:81] duration metric: took 405.297666ms waiting for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:42.140583 1366652 pod_ready.go:38] duration metric: took 1.4779064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:42.140618 1366652 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:51:42.140745 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:51:42.174310 1366652 api_server.go:72] duration metric: took 1.704883298s to wait for apiserver process to appear ...
	I1128 04:51:42.174335 1366652 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:51:42.174353 1366652 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1128 04:51:42.187802 1366652 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1128 04:51:42.189709 1366652 api_server.go:141] control plane version: v1.28.4
	I1128 04:51:42.189759 1366652 api_server.go:131] duration metric: took 15.416405ms to wait for apiserver health ...
	I1128 04:51:42.189777 1366652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:51:42.340088 1366652 system_pods.go:59] 7 kube-system pods found
	I1128 04:51:42.340187 1366652 system_pods.go:61] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running
	I1128 04:51:42.340215 1366652 system_pods.go:61] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running
	I1128 04:51:42.340240 1366652 system_pods.go:61] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:42.340261 1366652 system_pods.go:61] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running
	I1128 04:51:42.340282 1366652 system_pods.go:61] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running
	I1128 04:51:42.340313 1366652 system_pods.go:61] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:42.340333 1366652 system_pods.go:61] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running
	I1128 04:51:42.340354 1366652 system_pods.go:74] duration metric: took 150.570298ms to wait for pod list to return data ...
	I1128 04:51:42.340383 1366652 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:51:42.534711 1366652 default_sa.go:45] found service account: "default"
	I1128 04:51:42.534740 1366652 default_sa.go:55] duration metric: took 194.336793ms for default service account to be created ...
	I1128 04:51:42.534751 1366652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:51:42.738771 1366652 system_pods.go:86] 7 kube-system pods found
	I1128 04:51:42.738853 1366652 system_pods.go:89] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running
	I1128 04:51:42.738861 1366652 system_pods.go:89] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running
	I1128 04:51:42.738866 1366652 system_pods.go:89] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:42.738902 1366652 system_pods.go:89] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running
	I1128 04:51:42.738910 1366652 system_pods.go:89] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running
	I1128 04:51:42.738916 1366652 system_pods.go:89] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:42.738920 1366652 system_pods.go:89] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running
	I1128 04:51:42.738928 1366652 system_pods.go:126] duration metric: took 204.171344ms to wait for k8s-apps to be running ...
	I1128 04:51:42.738937 1366652 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:51:42.739005 1366652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:51:42.757301 1366652 system_svc.go:56] duration metric: took 18.355869ms WaitForService to wait for kubelet.
	I1128 04:51:42.757342 1366652 kubeadm.go:581] duration metric: took 2.287924128s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:51:42.757361 1366652 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:51:42.935332 1366652 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:51:42.935360 1366652 node_conditions.go:123] node cpu capacity is 2
	I1128 04:51:42.935372 1366652 node_conditions.go:105] duration metric: took 177.980668ms to run NodePressure ...
	I1128 04:51:42.935384 1366652 start.go:228] waiting for startup goroutines ...
	I1128 04:51:42.935392 1366652 start.go:233] waiting for cluster config update ...
	I1128 04:51:42.935399 1366652 start.go:242] writing updated cluster config ...
	I1128 04:51:42.935694 1366652 ssh_runner.go:195] Run: rm -f paused
	I1128 04:51:43.024283 1366652 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:51:43.026867 1366652 out.go:177] * Done! kubectl is now configured to use "pause-143970" cluster and "default" namespace by default

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-143970
helpers_test.go:235: (dbg) docker inspect pause-143970:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341",
	        "Created": "2023-11-28T04:49:21.318614701Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1361646,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T04:49:22.066627165Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341/hostname",
	        "HostsPath": "/var/lib/docker/containers/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341/hosts",
	        "LogPath": "/var/lib/docker/containers/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341-json.log",
	        "Name": "/pause-143970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-143970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-143970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/525e6336ef88ec947cf7ff68244f5956057143cd62a920c20b58fe13f3484a2b-init/diff:/var/lib/docker/overlay2/cc610f7b23c869d03809246385f10f80b89207e6d90717a6a4867696f2289751/diff",
	                "MergedDir": "/var/lib/docker/overlay2/525e6336ef88ec947cf7ff68244f5956057143cd62a920c20b58fe13f3484a2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/525e6336ef88ec947cf7ff68244f5956057143cd62a920c20b58fe13f3484a2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/525e6336ef88ec947cf7ff68244f5956057143cd62a920c20b58fe13f3484a2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-143970",
	                "Source": "/var/lib/docker/volumes/pause-143970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-143970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-143970",
	                "name.minikube.sigs.k8s.io": "pause-143970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d91902a35caaa892af732818b5a97a69b6987d87bb4cb82d87efd090ce0cc1b1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34468"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34466"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d91902a35caa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-143970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4431c444a71e",
	                        "pause-143970"
	                    ],
	                    "NetworkID": "686ec87fec55bf8535fe95f40bf905d464dfca4d81da830cc1ce5edd5eab5b27",
	                    "EndpointID": "a9b0fe7b75a5cd0126a4d5bf9428c646c3307aa51dbb1ab5ca3a35b48980de97",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
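Most of the inspect dump above matters here for the randomized host-port mappings (e.g. 8443/tcp published on 127.0.0.1:34466). A small sketch that shells out to docker inspect and decodes just that map; the struct mirrors only the JSON field names shown above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container mirrors just the fields of the `docker inspect` output used below.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-143970").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // inspect always emits a JSON array
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	if len(cs) == 0 {
		panic("no such container")
	}
	for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
		// Prints "apiserver published at 127.0.0.1:34466" for the dump above.
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}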
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-143970 -n pause-143970
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-143970 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-143970 logs -n 25: (3.296480005s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start   | -p test-preload-592962         | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:44 UTC | 28 Nov 23 04:45 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | --wait=true --preload=false    |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                             |         |         |                     |                     |
	| image   | test-preload-592962 image pull | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:45 UTC | 28 Nov 23 04:45 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                             |         |         |                     |                     |
	| stop    | -p test-preload-592962         | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:45 UTC | 28 Nov 23 04:46 UTC |
	| start   | -p test-preload-592962         | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:46 UTC | 28 Nov 23 04:47 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| image   | test-preload-592962 image list | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC | 28 Nov 23 04:47 UTC |
	| delete  | -p test-preload-592962         | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC | 28 Nov 23 04:47 UTC |
	| start   | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC | 28 Nov 23 04:47 UTC |
	|         | --memory=2048 --driver=docker  |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC | 28 Nov 23 04:47 UTC |
	|         | --cancel-scheduled             |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:48 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:48 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:48 UTC | 28 Nov 23 04:48 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:48 UTC | 28 Nov 23 04:49 UTC |
	| start   | -p insufficient-storage-576814 | insufficient-storage-576814 | jenkins | v1.32.0 | 28 Nov 23 04:49 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-576814 | insufficient-storage-576814 | jenkins | v1.32.0 | 28 Nov 23 04:49 UTC | 28 Nov 23 04:49 UTC |
	| start   | -p pause-143970 --memory=2048  | pause-143970                | jenkins | v1.32.0 | 28 Nov 23 04:49 UTC | 28 Nov 23 04:50 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-143970                | pause-143970                | jenkins | v1.32.0 | 28 Nov 23 04:50 UTC | 28 Nov 23 04:51 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p missing-upgrade-934743      | missing-upgrade-934743      | jenkins | v1.32.0 | 28 Nov 23 04:50 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-934743      | missing-upgrade-934743      | jenkins | v1.32.0 | 28 Nov 23 04:51 UTC | 28 Nov 23 04:51 UTC |
	| start   | -p kubernetes-upgrade-541146   | kubernetes-upgrade-541146   | jenkins | v1.32.0 | 28 Nov 23 04:51 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:51:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
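The header spells out the klog line format used by every entry below: an [IWEF] severity letter, mmdd date, microsecond timestamp, thread id, file:line, then the message. Purely as an illustration, a regexp that splits one such line into those fields:

package main

import (
	"fmt"
	"regexp"
)

// Matches the documented format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := "I1128 04:51:29.534429 1370266 out.go:296] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}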
	I1128 04:51:29.534429 1370266 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:51:29.534671 1370266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:51:29.534691 1370266 out.go:309] Setting ErrFile to fd 2...
	I1128 04:51:29.534710 1370266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:51:29.535027 1370266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:51:29.535461 1370266 out.go:303] Setting JSON to false
	I1128 04:51:29.536638 1370266 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27224,"bootTime":1701119865,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:51:29.536763 1370266 start.go:138] virtualization:  
	I1128 04:51:29.539403 1370266 out.go:177] * [kubernetes-upgrade-541146] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:51:29.545610 1370266 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:51:29.545683 1370266 notify.go:220] Checking for updates...
	I1128 04:51:29.547708 1370266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:51:29.549593 1370266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:51:29.551477 1370266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:51:29.553384 1370266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:51:29.555381 1370266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:51:29.557971 1370266 config.go:182] Loaded profile config "pause-143970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:51:29.558061 1370266 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:51:29.613068 1370266 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:51:29.613179 1370266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:51:29.757790 1370266 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:51:29.747682546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:51:29.757888 1370266 docker.go:295] overlay module found
	I1128 04:51:29.760405 1370266 out.go:177] * Using the docker driver based on user configuration
	I1128 04:51:29.763818 1370266 start.go:298] selected driver: docker
	I1128 04:51:29.763838 1370266 start.go:902] validating driver "docker" against <nil>
	I1128 04:51:29.763852 1370266 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:51:29.764488 1370266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:51:29.888427 1370266 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:51:29.879228974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:51:29.888593 1370266 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 04:51:29.888846 1370266 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1128 04:51:29.891304 1370266 out.go:177] * Using Docker driver with root privileges
	I1128 04:51:29.893353 1370266 cni.go:84] Creating CNI manager for ""
	I1128 04:51:29.893377 1370266 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:51:29.893388 1370266 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1128 04:51:29.893402 1370266 start_flags.go:323] config:
	{Name:kubernetes-upgrade-541146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-541146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:51:29.895608 1370266 out.go:177] * Starting control plane node kubernetes-upgrade-541146 in cluster kubernetes-upgrade-541146
	I1128 04:51:29.897331 1370266 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:51:29.899428 1370266 out.go:177] * Pulling base image ...
	I1128 04:51:29.901356 1370266 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 04:51:29.901412 1370266 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1128 04:51:29.901426 1370266 cache.go:56] Caching tarball of preloaded images
	I1128 04:51:29.901508 1370266 preload.go:174] Found /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1128 04:51:29.901523 1370266 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1128 04:51:29.901638 1370266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/config.json ...
	I1128 04:51:29.901663 1370266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/config.json: {Name:mk586f1b37b10a417886f0c595fb5bb4b3c8220d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:29.901831 1370266 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:51:29.934786 1370266 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1128 04:51:29.934815 1370266 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1128 04:51:29.934864 1370266 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:51:29.934929 1370266 start.go:365] acquiring machines lock for kubernetes-upgrade-541146: {Name:mk7612ca682106ebe84f315fb9128dfcbb3ccfee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:51:29.935546 1370266 start.go:369] acquired machines lock for "kubernetes-upgrade-541146" in 593.343µs
	I1128 04:51:29.935586 1370266 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-541146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-541146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:51:29.935673 1370266 start.go:125] createHost starting for "" (driver="docker")
	I1128 04:51:28.696646 1366652 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 04:51:28.704970 1366652 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 04:51:28.704991 1366652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 04:51:28.755634 1366652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 04:51:30.131138 1366652 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.37546283s)
	I1128 04:51:30.131172 1366652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:51:30.145621 1366652 system_pods.go:59] 7 kube-system pods found
	I1128 04:51:30.145670 1366652 system_pods.go:61] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:51:30.145703 1366652 system_pods.go:61] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:51:30.145718 1366652 system_pods.go:61] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:30.145731 1366652 system_pods.go:61] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:51:30.145746 1366652 system_pods.go:61] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:51:30.145753 1366652 system_pods.go:61] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:30.145766 1366652 system_pods.go:61] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:51:30.145798 1366652 system_pods.go:74] duration metric: took 14.616163ms to wait for pod list to return data ...
	I1128 04:51:30.145828 1366652 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:51:30.150412 1366652 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:51:30.150456 1366652 node_conditions.go:123] node cpu capacity is 2
	I1128 04:51:30.150470 1366652 node_conditions.go:105] duration metric: took 4.6182ms to run NodePressure ...
	I1128 04:51:30.150495 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:51:30.463619 1366652 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:51:30.470959 1366652 kubeadm.go:787] kubelet initialised
	I1128 04:51:30.471006 1366652 kubeadm.go:788] duration metric: took 7.359224ms waiting for restarted kubelet to initialise ...
	I1128 04:51:30.471015 1366652 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:30.479093 1366652 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:29.940451 1370266 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1128 04:51:29.940804 1370266 start.go:159] libmachine.API.Create for "kubernetes-upgrade-541146" (driver="docker")
	I1128 04:51:29.940838 1370266 client.go:168] LocalClient.Create starting
	I1128 04:51:29.940917 1370266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem
	I1128 04:51:29.940959 1370266 main.go:141] libmachine: Decoding PEM data...
	I1128 04:51:29.940980 1370266 main.go:141] libmachine: Parsing certificate...
	I1128 04:51:29.941040 1370266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem
	I1128 04:51:29.941064 1370266 main.go:141] libmachine: Decoding PEM data...
	I1128 04:51:29.941079 1370266 main.go:141] libmachine: Parsing certificate...
	I1128 04:51:29.941591 1370266 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-541146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1128 04:51:29.967152 1370266 cli_runner.go:211] docker network inspect kubernetes-upgrade-541146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1128 04:51:29.967234 1370266 network_create.go:281] running [docker network inspect kubernetes-upgrade-541146] to gather additional debugging logs...
	I1128 04:51:29.967251 1370266 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-541146
	W1128 04:51:29.992515 1370266 cli_runner.go:211] docker network inspect kubernetes-upgrade-541146 returned with exit code 1
	I1128 04:51:29.992543 1370266 network_create.go:284] error running [docker network inspect kubernetes-upgrade-541146]: docker network inspect kubernetes-upgrade-541146: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-541146 not found
	I1128 04:51:29.992555 1370266 network_create.go:286] output of [docker network inspect kubernetes-upgrade-541146]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-541146 not found
	
	** /stderr **
	I1128 04:51:29.992685 1370266 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:51:30.034613 1370266 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-457410d7183c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:60:a5:a2:7c} reservation:<nil>}
	I1128 04:51:30.034964 1370266 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0d78a22dd546 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bd:04:fe:9e} reservation:<nil>}
	I1128 04:51:30.036066 1370266 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-686ec87fec55 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a3:ec:05:d2} reservation:<nil>}
	I1128 04:51:30.036763 1370266 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025f6360}
	I1128 04:51:30.036829 1370266 network_create.go:124] attempt to create docker network kubernetes-upgrade-541146 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1128 04:51:30.036927 1370266 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-541146 kubernetes-upgrade-541146
	I1128 04:51:30.187702 1370266 network_create.go:108] docker network kubernetes-upgrade-541146 192.168.76.0/24 created
	I1128 04:51:30.187740 1370266 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-541146" container
	I1128 04:51:30.187818 1370266 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1128 04:51:30.213072 1370266 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-541146 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-541146 --label created_by.minikube.sigs.k8s.io=true
	I1128 04:51:30.241497 1370266 oci.go:103] Successfully created a docker volume kubernetes-upgrade-541146
	I1128 04:51:30.241584 1370266 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-541146-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-541146 --entrypoint /usr/bin/test -v kubernetes-upgrade-541146:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1128 04:51:30.993170 1370266 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-541146
	I1128 04:51:30.993229 1370266 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 04:51:30.993252 1370266 kic.go:194] Starting extracting preloaded images to volume ...
	I1128 04:51:30.993346 1370266 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-541146:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1128 04:51:32.505917 1366652 pod_ready.go:92] pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:32.505945 1366652 pod_ready.go:81] duration metric: took 2.026818124s waiting for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:32.505958 1366652 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.527276 1366652 pod_ready.go:92] pod "etcd-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.527318 1366652 pod_ready.go:81] duration metric: took 1.021347081s waiting for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.527333 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.534523 1366652 pod_ready.go:92] pod "kube-apiserver-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.534552 1366652 pod_ready.go:81] duration metric: took 7.211105ms waiting for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.534564 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.735118 1366652 pod_ready.go:92] pod "kube-controller-manager-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.735146 1366652 pod_ready.go:81] duration metric: took 200.564503ms waiting for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.735159 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:34.134677 1366652 pod_ready.go:92] pod "kube-proxy-l29m5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:34.134702 1366652 pod_ready.go:81] duration metric: took 399.53662ms waiting for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:34.134714 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:36.443255 1366652 pod_ready.go:102] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"False"
	I1128 04:51:38.395206 1370266 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-541146:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (7.401804929s)
	I1128 04:51:38.395243 1370266 kic.go:203] duration metric: took 7.401988 seconds to extract preloaded images to volume
	W1128 04:51:38.395393 1370266 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1128 04:51:38.395507 1370266 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1128 04:51:38.475157 1370266 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-541146 --name kubernetes-upgrade-541146 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-541146 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-541146 --network kubernetes-upgrade-541146 --ip 192.168.76.2 --volume kubernetes-upgrade-541146:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 04:51:38.857971 1370266 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-541146 --format={{.State.Running}}
	I1128 04:51:38.887429 1370266 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-541146 --format={{.State.Status}}
	I1128 04:51:38.915629 1370266 cli_runner.go:164] Run: docker exec kubernetes-upgrade-541146 stat /var/lib/dpkg/alternatives/iptables
	I1128 04:51:39.011674 1370266 oci.go:144] the created container "kubernetes-upgrade-541146" has a running status.
	I1128 04:51:39.011719 1370266 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/kubernetes-upgrade-541146/id_rsa...
	I1128 04:51:38.444024 1366652 pod_ready.go:102] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"False"
	I1128 04:51:40.444791 1366652 pod_ready.go:92] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.444814 1366652 pod_ready.go:81] duration metric: took 6.310092376s waiting for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.444823 1366652 pod_ready.go:38] duration metric: took 9.973766517s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:40.444839 1366652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:51:40.456537 1366652 ops.go:34] apiserver oom_adj: -16
	I1128 04:51:40.456614 1366652 kubeadm.go:640] restartCluster took 31.744473113s
	I1128 04:51:40.456636 1366652 kubeadm.go:406] StartCluster complete in 31.843899524s
	I1128 04:51:40.456713 1366652 settings.go:142] acquiring lock: {Name:mk51bec1305a61d1e5f21881e1d4b01dfafff7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:40.456832 1366652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:51:40.457684 1366652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/kubeconfig: {Name:mkdd24900acdf0a7a11c60f4e6d81c9963f4153d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:40.458023 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:51:40.458411 1366652 config.go:182] Loaded profile config "pause-143970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:51:40.458720 1366652 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:51:40.462401 1366652 out.go:177] * Enabled addons: 
	I1128 04:51:40.459934 1366652 kapi.go:59] client config for pause-143970: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:51:40.464696 1366652 addons.go:502] enable addons completed in 5.993972ms: enabled=[]
	I1128 04:51:40.469355 1366652 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-143970" context rescaled to 1 replicas
	I1128 04:51:40.469395 1366652 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:51:40.471386 1366652 out.go:177] * Verifying Kubernetes components...
	I1128 04:51:40.473192 1366652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:51:40.658375 1366652 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 04:51:40.658429 1366652 node_ready.go:35] waiting up to 6m0s for node "pause-143970" to be "Ready" ...
	I1128 04:51:40.662633 1366652 node_ready.go:49] node "pause-143970" has status "Ready":"True"
	I1128 04:51:40.662654 1366652 node_ready.go:38] duration metric: took 4.212729ms waiting for node "pause-143970" to be "Ready" ...
	I1128 04:51:40.662666 1366652 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:40.671706 1366652 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.680813 1366652 pod_ready.go:92] pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.680885 1366652 pod_ready.go:81] duration metric: took 9.102378ms waiting for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.680918 1366652 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.712981 1366652 pod_ready.go:92] pod "etcd-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.713005 1366652 pod_ready.go:81] duration metric: took 32.066002ms waiting for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.713022 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.934601 1366652 pod_ready.go:92] pod "kube-apiserver-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.934622 1366652 pod_ready.go:81] duration metric: took 221.592391ms waiting for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.934634 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.335746 1366652 pod_ready.go:92] pod "kube-controller-manager-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:41.335777 1366652 pod_ready.go:81] duration metric: took 401.135898ms waiting for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.335789 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.735161 1366652 pod_ready.go:92] pod "kube-proxy-l29m5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:41.735190 1366652 pod_ready.go:81] duration metric: took 399.387969ms waiting for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.735203 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:42.140039 1366652 pod_ready.go:92] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:42.140526 1366652 pod_ready.go:81] duration metric: took 405.297666ms waiting for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:42.140583 1366652 pod_ready.go:38] duration metric: took 1.4779064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:42.140618 1366652 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:51:42.140745 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:51:42.174310 1366652 api_server.go:72] duration metric: took 1.704883298s to wait for apiserver process to appear ...
	I1128 04:51:42.174335 1366652 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:51:42.174353 1366652 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1128 04:51:42.187802 1366652 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1128 04:51:42.189709 1366652 api_server.go:141] control plane version: v1.28.4
	I1128 04:51:42.189759 1366652 api_server.go:131] duration metric: took 15.416405ms to wait for apiserver health ...
	I1128 04:51:42.189777 1366652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:51:42.340088 1366652 system_pods.go:59] 7 kube-system pods found
	I1128 04:51:42.340187 1366652 system_pods.go:61] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running
	I1128 04:51:42.340215 1366652 system_pods.go:61] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running
	I1128 04:51:42.340240 1366652 system_pods.go:61] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:42.340261 1366652 system_pods.go:61] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running
	I1128 04:51:42.340282 1366652 system_pods.go:61] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running
	I1128 04:51:42.340313 1366652 system_pods.go:61] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:42.340333 1366652 system_pods.go:61] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running
	I1128 04:51:42.340354 1366652 system_pods.go:74] duration metric: took 150.570298ms to wait for pod list to return data ...
	I1128 04:51:42.340383 1366652 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:51:42.534711 1366652 default_sa.go:45] found service account: "default"
	I1128 04:51:42.534740 1366652 default_sa.go:55] duration metric: took 194.336793ms for default service account to be created ...
	I1128 04:51:42.534751 1366652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:51:42.738771 1366652 system_pods.go:86] 7 kube-system pods found
	I1128 04:51:42.738853 1366652 system_pods.go:89] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running
	I1128 04:51:42.738861 1366652 system_pods.go:89] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running
	I1128 04:51:42.738866 1366652 system_pods.go:89] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:42.738902 1366652 system_pods.go:89] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running
	I1128 04:51:42.738910 1366652 system_pods.go:89] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running
	I1128 04:51:42.738916 1366652 system_pods.go:89] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:42.738920 1366652 system_pods.go:89] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running
	I1128 04:51:42.738928 1366652 system_pods.go:126] duration metric: took 204.171344ms to wait for k8s-apps to be running ...
	I1128 04:51:42.738937 1366652 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:51:42.739005 1366652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:51:42.757301 1366652 system_svc.go:56] duration metric: took 18.355869ms WaitForService to wait for kubelet.
	I1128 04:51:42.757342 1366652 kubeadm.go:581] duration metric: took 2.287924128s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:51:42.757361 1366652 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:51:42.935332 1366652 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:51:42.935360 1366652 node_conditions.go:123] node cpu capacity is 2
	I1128 04:51:42.935372 1366652 node_conditions.go:105] duration metric: took 177.980668ms to run NodePressure ...
	I1128 04:51:42.935384 1366652 start.go:228] waiting for startup goroutines ...
	I1128 04:51:42.935392 1366652 start.go:233] waiting for cluster config update ...
	I1128 04:51:42.935399 1366652 start.go:242] writing updated cluster config ...
	I1128 04:51:42.935694 1366652 ssh_runner.go:195] Run: rm -f paused
	I1128 04:51:43.024283 1366652 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:51:43.026867 1366652 out.go:177] * Done! kubectl is now configured to use "pause-143970" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.044650949Z" level=info msg="Creating container: kube-system/kube-proxy-l29m5/kube-proxy" id=cc02b490-cd55-428c-b535-188da4cba81e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.045005885Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.067125427Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bb328f8057f7d61623a761d19ea77aff23acdb689ad5d672684256891669f27a/merged/etc/passwd: no such file or directory"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.067176003Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bb328f8057f7d61623a761d19ea77aff23acdb689ad5d672684256891669f27a/merged/etc/group: no such file or directory"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.249312129Z" level=info msg="Created container e709ce489bc40cb16a6910cd5959260baae04364470ee0fdbe2be9ab52a32f16: kube-system/coredns-5dd5756b68-mxxmx/coredns" id=3fb1f18d-1a79-4f39-a4a7-ac22480e0530 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.249897160Z" level=info msg="Starting container: e709ce489bc40cb16a6910cd5959260baae04364470ee0fdbe2be9ab52a32f16" id=4bee7487-00ea-4bd6-b12e-a26224acdcb1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.270270471Z" level=info msg="Created container f06f5f2557e36b4b527398f0ecf47a9211a9549b8582eb05f90612db8857ef09: kube-system/kube-proxy-l29m5/kube-proxy" id=cc02b490-cd55-428c-b535-188da4cba81e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.271070565Z" level=info msg="Starting container: f06f5f2557e36b4b527398f0ecf47a9211a9549b8582eb05f90612db8857ef09" id=c74281ea-7090-49f4-b97c-28c9ca2a2e90 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.277127354Z" level=info msg="Started container" PID=3258 containerID=e709ce489bc40cb16a6910cd5959260baae04364470ee0fdbe2be9ab52a32f16 description=kube-system/coredns-5dd5756b68-mxxmx/coredns id=4bee7487-00ea-4bd6-b12e-a26224acdcb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7b41abe2e8d557d9cf90ace79dce20ab688613318ff481a2cd638503ca0644b
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.285441872Z" level=info msg="Created container 35159b6382dd9dc26afcfb62439a2e2dfeb8f1d83724425e2321e6b515b0af84: kube-system/kindnet-nxh4c/kindnet-cni" id=2a173381-050b-4f70-91e1-dc2ef9ff9fd6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.286228658Z" level=info msg="Starting container: 35159b6382dd9dc26afcfb62439a2e2dfeb8f1d83724425e2321e6b515b0af84" id=be372780-48bb-42fa-9579-14537bcecad9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.308700895Z" level=info msg="Started container" PID=3266 containerID=35159b6382dd9dc26afcfb62439a2e2dfeb8f1d83724425e2321e6b515b0af84 description=kube-system/kindnet-nxh4c/kindnet-cni id=be372780-48bb-42fa-9579-14537bcecad9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=643404fca0a2c7364e34b414e70c3b01b8be086d6b2664b64903c920c9d1c831
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.341430779Z" level=info msg="Started container" PID=3264 containerID=f06f5f2557e36b4b527398f0ecf47a9211a9549b8582eb05f90612db8857ef09 description=kube-system/kube-proxy-l29m5/kube-proxy id=c74281ea-7090-49f4-b97c-28c9ca2a2e90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7889be2eb2c8aa6d63702441b59ff5e7c20bfa178ce7c0bac9eab1235acbbe6a
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.707431169Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.721008248Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.721043308Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.721058422Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.738045855Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.738087332Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.793496147Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.825321240Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.825355742Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.825372718Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.860081250Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.860118731Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	35159b6382dd9       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   16 seconds ago      Running             kindnet-cni               2                   643404fca0a2c       kindnet-nxh4c
	f06f5f2557e36       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   16 seconds ago      Running             kube-proxy                2                   7889be2eb2c8a       kube-proxy-l29m5
	e709ce489bc40       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   16 seconds ago      Running             coredns                   2                   f7b41abe2e8d5       coredns-5dd5756b68-mxxmx
	c8bd390ee9f62       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   23 seconds ago      Running             kube-controller-manager   2                   c277ccb679063       kube-controller-manager-pause-143970
	09f9bf7b4d6c3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   23 seconds ago      Running             etcd                      2                   ec39115b318dd       etcd-pause-143970
	70d0cdc81c3fc       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   23 seconds ago      Running             kube-apiserver            2                   fbb3db5147c09       kube-apiserver-pause-143970
	329e40cdef7b2       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   23 seconds ago      Running             kube-scheduler            2                   7ded19db045a1       kube-scheduler-pause-143970
	194b96a9b24c3       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   47 seconds ago      Exited              kube-scheduler            1                   7ded19db045a1       kube-scheduler-pause-143970
	b43da3b0d1180       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   47 seconds ago      Exited              kindnet-cni               1                   643404fca0a2c       kindnet-nxh4c
	b03030592f77e       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   48 seconds ago      Exited              kube-controller-manager   1                   c277ccb679063       kube-controller-manager-pause-143970
	74c29b25ba174       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   48 seconds ago      Exited              kube-apiserver            1                   fbb3db5147c09       kube-apiserver-pause-143970
	8448df08bddbd       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   48 seconds ago      Exited              kube-proxy                1                   7889be2eb2c8a       kube-proxy-l29m5
	323f5946e5014       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   48 seconds ago      Exited              etcd                      1                   ec39115b318dd       etcd-pause-143970
	16891782906b6       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   48 seconds ago      Exited              coredns                   1                   f7b41abe2e8d5       coredns-5dd5756b68-mxxmx
	
	* 
	* ==> coredns [16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51852 - 53916 "HINFO IN 227728516948944030.7492445755128998669. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021311119s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [e709ce489bc40cb16a6910cd5959260baae04364470ee0fdbe2be9ab52a32f16] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56033 - 56771 "HINFO IN 5308972760632549043.379345747403652935. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.046246065s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-143970
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-143970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=pause-143970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_50_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:49:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-143970
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:51:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:51:27 +0000   Tue, 28 Nov 2023 04:49:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:51:27 +0000   Tue, 28 Nov 2023 04:49:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:51:27 +0000   Tue, 28 Nov 2023 04:49:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:51:27 +0000   Tue, 28 Nov 2023 04:50:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-143970
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 12c93c2ee1834eb08542e13124f46040
	  System UUID:                451828b9-82a9-4939-ade2-4b87426ccaea
	  Boot ID:                    29ce650a-e22a-4e0d-bffe-126490eafcf6
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-mxxmx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     95s
	  kube-system                 etcd-pause-143970                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         107s
	  kube-system                 kindnet-nxh4c                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      95s
	  kube-system                 kube-apiserver-pause-143970             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-pause-143970    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-l29m5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-pause-143970             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node pause-143970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node pause-143970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x8 over 2m1s)  kubelet          Node pause-143970 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node pause-143970 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node pause-143970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node pause-143970 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           96s                  node-controller  Node pause-143970 event: Registered Node pause-143970 in Controller
	  Normal  NodeReady                63s                  kubelet          Node pause-143970 status is now: NodeReady
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 25s)    kubelet          Node pause-143970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 25s)    kubelet          Node pause-143970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x8 over 25s)    kubelet          Node pause-143970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                   node-controller  Node pause-143970 event: Registered Node pause-143970 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001139] FS-Cache: O-key=[8] '4f415c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008c3c2dae
	[  +0.001123] FS-Cache: N-key=[8] '4f415c0100000000'
	[  +0.003665] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=0000000000e844bc
	[  +0.001111] FS-Cache: O-key=[8] '4f415c0100000000'
	[  +0.000746] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=00000000dbb2bfbf
	[  +0.001088] FS-Cache: N-key=[8] '4f415c0100000000'
	[  +2.166969] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001027] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=000000001868d326
	[  +0.001142] FS-Cache: O-key=[8] '4e415c0100000000'
	[  +0.000817] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008c3c2dae
	[  +0.001108] FS-Cache: N-key=[8] '4e415c0100000000'
	[  +0.392945] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001131] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=000000007d358928
	[  +0.001121] FS-Cache: O-key=[8] '54415c0100000000'
	[  +0.000768] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000007d93f4ca
	[  +0.001181] FS-Cache: N-key=[8] '54415c0100000000'
	
	* 
	* ==> etcd [09f9bf7b4d6c3b8a21b9d36ce96eede803014f28ccc7125c66e0827be2e5dc1f] <==
	* {"level":"info","ts":"2023-11-28T04:51:22.142167Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T04:51:22.142178Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T04:51:22.151924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-11-28T04:51:22.152057Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-11-28T04:51:22.152332Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:51:22.152377Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:51:22.167073Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T04:51:22.167274Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T04:51:22.167306Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T04:51:22.167376Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-11-28T04:51:22.16739Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-11-28T04:51:23.069548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-28T04:51:23.069606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-28T04:51:23.069623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-11-28T04:51:23.069636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-11-28T04:51:23.069645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-11-28T04:51:23.069655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-11-28T04:51:23.069664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-11-28T04:51:23.072963Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-143970 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T04:51:23.072996Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:51:23.074316Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-11-28T04:51:23.073041Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:51:23.075492Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T04:51:23.085743Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T04:51:23.085773Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20] <==
	* 
	* 
	* ==> kernel <==
	*  04:51:45 up  7:34,  0 users,  load average: 3.84, 2.64, 2.12
	Linux pause-143970 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [35159b6382dd9dc26afcfb62439a2e2dfeb8f1d83724425e2321e6b515b0af84] <==
	* I1128 04:51:28.396157       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1128 04:51:28.396221       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1128 04:51:28.396358       1 main.go:116] setting mtu 1500 for CNI 
	I1128 04:51:28.396368       1 main.go:146] kindnetd IP family: "ipv4"
	I1128 04:51:28.396380       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1128 04:51:28.704416       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1128 04:51:28.704449       1 main.go:227] handling current node
	I1128 04:51:38.809542       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1128 04:51:38.809680       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868] <==
	* I1128 04:50:57.240409       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1128 04:50:57.240508       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1128 04:50:57.242011       1 main.go:116] setting mtu 1500 for CNI 
	I1128 04:50:57.242045       1 main.go:146] kindnetd IP family: "ipv4"
	I1128 04:50:57.242058       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
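
Both kindnet containers walk the same startup sequence: connect to the apiserver, record hostIP and podIP, set the CNI MTU, then periodically handle each node ("handling current node" when it is its own). A rough client-go sketch of that loop, assuming in-cluster credentials and standing in for, not reproducing, kindnet's source:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// kindnet runs as a DaemonSet pod, so in-cluster credentials apply;
		// this sketch assumes the same environment.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, node := range nodes.Items {
			// For the node it runs on, kindnet just logs "handling current node";
			// for peers it would install a route toward node.Spec.PodCIDR.
			fmt.Printf("node %s podCIDR %s\n", node.Name, node.Spec.PodCIDR)
		}
	}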
	
	* 
	* ==> kube-apiserver [70d0cdc81c3fc749e2f656a70e54e4378cb5290ea907d9871dfbcb9586c7df78] <==
	* I1128 04:51:27.269504       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1128 04:51:27.270114       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1128 04:51:27.270278       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1128 04:51:27.235798       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1128 04:51:27.235825       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1128 04:51:27.558029       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 04:51:27.560493       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1128 04:51:27.569194       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1128 04:51:27.570120       1 shared_informer.go:318] Caches are synced for configmaps
	I1128 04:51:27.583093       1 aggregator.go:166] initial CRD sync complete...
	I1128 04:51:27.583177       1 autoregister_controller.go:141] Starting autoregister controller
	I1128 04:51:27.583208       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1128 04:51:27.583261       1 cache.go:39] Caches are synced for autoregister controller
	I1128 04:51:27.645190       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1128 04:51:27.645286       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1128 04:51:27.654109       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1128 04:51:27.657246       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1128 04:51:27.665207       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1128 04:51:27.670669       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1128 04:51:28.305337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1128 04:51:30.109304       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1128 04:51:30.341021       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1128 04:51:30.357028       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1128 04:51:30.441048       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 04:51:30.451766       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee] <==
	* 
	* 
	* ==> kube-controller-manager [b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f] <==
	* 
	* 
	* ==> kube-controller-manager [c8bd390ee9f62c921e5e42f68c3cf5899edaaff707741d4264e01cc3df054ca5] <==
	* I1128 04:51:40.181608       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1128 04:51:40.181639       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1128 04:51:40.181670       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1128 04:51:40.201979       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1128 04:51:40.209121       1 shared_informer.go:318] Caches are synced for HPA
	I1128 04:51:40.213033       1 shared_informer.go:318] Caches are synced for disruption
	I1128 04:51:40.213084       1 shared_informer.go:318] Caches are synced for stateful set
	I1128 04:51:40.223740       1 shared_informer.go:318] Caches are synced for persistent volume
	I1128 04:51:40.230558       1 shared_informer.go:318] Caches are synced for PVC protection
	I1128 04:51:40.244379       1 shared_informer.go:318] Caches are synced for resource quota
	I1128 04:51:40.248421       1 shared_informer.go:318] Caches are synced for attach detach
	I1128 04:51:40.272372       1 shared_informer.go:318] Caches are synced for expand
	I1128 04:51:40.279249       1 shared_informer.go:318] Caches are synced for ephemeral
	I1128 04:51:40.284181       1 shared_informer.go:318] Caches are synced for daemon sets
	I1128 04:51:40.327986       1 shared_informer.go:318] Caches are synced for resource quota
	I1128 04:51:40.328791       1 shared_informer.go:318] Caches are synced for taint
	I1128 04:51:40.328911       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1128 04:51:40.329015       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-143970"
	I1128 04:51:40.329068       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1128 04:51:40.329098       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1128 04:51:40.329135       1 taint_manager.go:210] "Sending events to api server"
	I1128 04:51:40.329824       1 event.go:307] "Event occurred" object="pause-143970" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-143970 event: Registered Node pause-143970 in Controller"
	I1128 04:51:40.615686       1 shared_informer.go:318] Caches are synced for garbage collector
	I1128 04:51:40.677214       1 shared_informer.go:318] Caches are synced for garbage collector
	I1128 04:51:40.677267       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
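
The interleaved "Waiting for caches to sync" / "Caches are synced" lines are the standard client-go shared-informer startup handshake. The same pattern in a self-contained sketch (the kubeconfig path is an assumption; this is not controller-manager code):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
		podInformer := factory.Core().V1().Pods().Informer()
		stop := make(chan struct{})
		defer close(stop)
		// Start the informers, then block until their local caches hold a
		// full initial list -- the moment the logs call "Caches are synced".
		factory.Start(stop)
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			panic("cache never synced")
		}
		fmt.Println("Caches are synced for pods")
	}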
	
	* 
	* ==> kube-proxy [8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0] <==
	* I1128 04:50:57.381070       1 server_others.go:69] "Using iptables proxy"
	E1128 04:50:57.383763       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-143970": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [f06f5f2557e36b4b527398f0ecf47a9211a9549b8582eb05f90612db8857ef09] <==
	* I1128 04:51:28.421124       1 server_others.go:69] "Using iptables proxy"
	I1128 04:51:28.454747       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1128 04:51:28.512153       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1128 04:51:28.514595       1 server_others.go:152] "Using iptables Proxier"
	I1128 04:51:28.514701       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1128 04:51:28.514743       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1128 04:51:28.514861       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 04:51:28.515133       1 server.go:846] "Version info" version="v1.28.4"
	I1128 04:51:28.515377       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:51:28.516531       1 config.go:188] "Starting service config controller"
	I1128 04:51:28.516622       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 04:51:28.516896       1 config.go:97] "Starting endpoint slice config controller"
	I1128 04:51:28.516940       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 04:51:28.517482       1 config.go:315] "Starting node config controller"
	I1128 04:51:28.517539       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 04:51:28.616989       1 shared_informer.go:318] Caches are synced for service config
	I1128 04:51:28.617088       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 04:51:28.617621       1 shared_informer.go:318] Caches are synced for node config
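
kube-proxy logs that it sets route_localnet=1 so NodePorts stay reachable on loopback addresses. That knob is an ordinary sysctl under /proc; a small illustrative check (the write, as kube-proxy performs it, needs root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const path = "/proc/sys/net/ipv4/conf/all/route_localnet"
		val, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read failed (needs Linux):", err)
			return
		}
		fmt.Println("route_localnet =", strings.TrimSpace(string(val)))
		// Setting it, as kube-proxy's proxier does, requires root:
		//   os.WriteFile(path, []byte("1"), 0o644)
	}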
	
	* 
	* ==> kube-scheduler [194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708] <==
	* 
	* 
	* ==> kube-scheduler [329e40cdef7b24c9817f8524403cf0fe3faa1f6b212525287fd1b82f14448b77] <==
	* I1128 04:51:24.522740       1 serving.go:348] Generated self-signed cert in-memory
	W1128 04:51:27.466778       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 04:51:27.466885       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 04:51:27.466918       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 04:51:27.466957       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 04:51:27.586180       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1128 04:51:27.586285       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:51:27.601573       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 04:51:27.601686       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 04:51:27.602255       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1128 04:51:27.604563       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 04:51:27.702249       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
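
The requestheader_controller warning prints its usual fix: grant the scheduler's service account the extension-apiserver-authentication-reader role. The suggested kubectl one-liner maps to an RBAC RoleBinding; sketched here with client-go, keeping the log's placeholders (ROLEBINDING_NAME, YOUR_NS, YOUR_SA) verbatim:

	package main

	import (
		"context"

		rbacv1 "k8s.io/api/rbac/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		rb := &rbacv1.RoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "ROLEBINDING_NAME", Namespace: "kube-system"},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "Role",
				Name:     "extension-apiserver-authentication-reader",
			},
			Subjects: []rbacv1.Subject{
				{Kind: "ServiceAccount", Name: "YOUR_SA", Namespace: "YOUR_NS"},
			},
		}
		if _, err := clientset.RbacV1().RoleBindings("kube-system").Create(context.TODO(), rb, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

As the next log lines note, the scheduler continues without the lookup anyway, so this binding is optional hardening rather than a required fix.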
	
	* 
	* ==> kubelet <==
	* Nov 28 04:51:21 pause-143970 kubelet[3002]: W1128 04:51:21.988689    3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Nov 28 04:51:21 pause-143970 kubelet[3002]: E1128 04:51:21.988765    3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Nov 28 04:51:22 pause-143970 kubelet[3002]: W1128 04:51:22.070584    3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-143970&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Nov 28 04:51:22 pause-143970 kubelet[3002]: E1128 04:51:22.070876    3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-143970&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Nov 28 04:51:22 pause-143970 kubelet[3002]: E1128 04:51:22.153941    3002 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-143970?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="1.6s"
	Nov 28 04:51:22 pause-143970 kubelet[3002]: I1128 04:51:22.260019    3002 kubelet_node_status.go:70] "Attempting to register node" node="pause-143970"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.674825    3002 kubelet_node_status.go:108] "Node was previously registered" node="pause-143970"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.674943    3002 kubelet_node_status.go:73] "Successfully registered node" node="pause-143970"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.677213    3002 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.678099    3002 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.717245    3002 apiserver.go:52] "Watching apiserver"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.721108    3002 topology_manager.go:215] "Topology Admit Handler" podUID="864ee813-0e93-434e-8930-250e69f33cfe" podNamespace="kube-system" podName="kube-proxy-l29m5"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.723176    3002 topology_manager.go:215] "Topology Admit Handler" podUID="dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5" podNamespace="kube-system" podName="coredns-5dd5756b68-mxxmx"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.723254    3002 topology_manager.go:215] "Topology Admit Handler" podUID="f4689f10-5d67-46e1-85cb-7aadda9b847b" podNamespace="kube-system" podName="kindnet-nxh4c"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.745183    3002 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812071    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/864ee813-0e93-434e-8930-250e69f33cfe-lib-modules\") pod \"kube-proxy-l29m5\" (UID: \"864ee813-0e93-434e-8930-250e69f33cfe\") " pod="kube-system/kube-proxy-l29m5"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812144    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4689f10-5d67-46e1-85cb-7aadda9b847b-lib-modules\") pod \"kindnet-nxh4c\" (UID: \"f4689f10-5d67-46e1-85cb-7aadda9b847b\") " pod="kube-system/kindnet-nxh4c"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812169    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f4689f10-5d67-46e1-85cb-7aadda9b847b-cni-cfg\") pod \"kindnet-nxh4c\" (UID: \"f4689f10-5d67-46e1-85cb-7aadda9b847b\") " pod="kube-system/kindnet-nxh4c"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812193    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4689f10-5d67-46e1-85cb-7aadda9b847b-xtables-lock\") pod \"kindnet-nxh4c\" (UID: \"f4689f10-5d67-46e1-85cb-7aadda9b847b\") " pod="kube-system/kindnet-nxh4c"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812262    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/864ee813-0e93-434e-8930-250e69f33cfe-xtables-lock\") pod \"kube-proxy-l29m5\" (UID: \"864ee813-0e93-434e-8930-250e69f33cfe\") " pod="kube-system/kube-proxy-l29m5"
	Nov 28 04:51:28 pause-143970 kubelet[3002]: I1128 04:51:28.024116    3002 scope.go:117] "RemoveContainer" containerID="b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868"
	Nov 28 04:51:28 pause-143970 kubelet[3002]: I1128 04:51:28.025583    3002 scope.go:117] "RemoveContainer" containerID="16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e"
	Nov 28 04:51:28 pause-143970 kubelet[3002]: I1128 04:51:28.033020    3002 scope.go:117] "RemoveContainer" containerID="8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0"
	Nov 28 04:51:29 pause-143970 kubelet[3002]: I1128 04:51:29.937432    3002 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 28 04:51:32 pause-143970 kubelet[3002]: I1128 04:51:32.027357    3002 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
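
The kubelet entries at the tail of that dump show reflectors failing with "connection refused" against control-plane.minikube.internal:8443 while the apiserver restarts, then the node re-registering at 04:51:27. The same reachability check reduced to a hypothetical standalone probe:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the endpoint the kubelet reflectors use; during the apiserver
		// restart this returns "connect: connection refused", as in the logs.
		conn, err := net.DialTimeout("tcp", "control-plane.minikube.internal:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not accepting connections yet:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}
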
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-143970 -n pause-143970
helpers_test.go:261: (dbg) Run:  kubectl --context pause-143970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-143970
helpers_test.go:235: (dbg) docker inspect pause-143970:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341",
	        "Created": "2023-11-28T04:49:21.318614701Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1361646,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T04:49:22.066627165Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341/hostname",
	        "HostsPath": "/var/lib/docker/containers/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341/hosts",
	        "LogPath": "/var/lib/docker/containers/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341/4431c444a71ebed60b73fd745ffc75377e1c1023f9f270965708ede872caf341-json.log",
	        "Name": "/pause-143970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-143970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-143970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/525e6336ef88ec947cf7ff68244f5956057143cd62a920c20b58fe13f3484a2b-init/diff:/var/lib/docker/overlay2/cc610f7b23c869d03809246385f10f80b89207e6d90717a6a4867696f2289751/diff",
	                "MergedDir": "/var/lib/docker/overlay2/525e6336ef88ec947cf7ff68244f5956057143cd62a920c20b58fe13f3484a2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/525e6336ef88ec947cf7ff68244f5956057143cd62a920c20b58fe13f3484a2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/525e6336ef88ec947cf7ff68244f5956057143cd62a920c20b58fe13f3484a2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-143970",
	                "Source": "/var/lib/docker/volumes/pause-143970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-143970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-143970",
	                "name.minikube.sigs.k8s.io": "pause-143970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d91902a35caaa892af732818b5a97a69b6987d87bb4cb82d87efd090ce0cc1b1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34468"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34466"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d91902a35caa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-143970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4431c444a71e",
	                        "pause-143970"
	                    ],
	                    "NetworkID": "686ec87fec55bf8535fe95f40bf905d464dfca4d81da830cc1ce5edd5eab5b27",
	                    "EndpointID": "a9b0fe7b75a5cd0126a4d5bf9428c646c3307aa51dbb1ab5ca3a35b48980de97",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
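
NetworkSettings.Ports above maps each exposed container port to an ephemeral host port on 127.0.0.1 (8443/tcp, the apiserver, lands on 34466 here). One way to extract such a mapping programmatically, sketched with a struct that mirrors only the fields used rather than a full Docker API type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry covers just the slice of `docker inspect` output we need.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-143970").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect always emits a JSON array
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
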
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-143970 -n pause-143970
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-143970 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-143970 logs -n 25: (2.6996391s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start   | -p test-preload-592962         | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:44 UTC | 28 Nov 23 04:45 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | --wait=true --preload=false    |                             |         |         |                     |                     |
	|         | --driver=docker                |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                             |         |         |                     |                     |
	| image   | test-preload-592962 image pull | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:45 UTC | 28 Nov 23 04:45 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                             |         |         |                     |                     |
	| stop    | -p test-preload-592962         | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:45 UTC | 28 Nov 23 04:46 UTC |
	| start   | -p test-preload-592962         | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:46 UTC | 28 Nov 23 04:47 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| image   | test-preload-592962 image list | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC | 28 Nov 23 04:47 UTC |
	| delete  | -p test-preload-592962         | test-preload-592962         | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC | 28 Nov 23 04:47 UTC |
	| start   | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC | 28 Nov 23 04:47 UTC |
	|         | --memory=2048 --driver=docker  |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:47 UTC | 28 Nov 23 04:47 UTC |
	|         | --cancel-scheduled             |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:48 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:48 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:48 UTC | 28 Nov 23 04:48 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-020100       | scheduled-stop-020100       | jenkins | v1.32.0 | 28 Nov 23 04:48 UTC | 28 Nov 23 04:49 UTC |
	| start   | -p insufficient-storage-576814 | insufficient-storage-576814 | jenkins | v1.32.0 | 28 Nov 23 04:49 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-576814 | insufficient-storage-576814 | jenkins | v1.32.0 | 28 Nov 23 04:49 UTC | 28 Nov 23 04:49 UTC |
	| start   | -p pause-143970 --memory=2048  | pause-143970                | jenkins | v1.32.0 | 28 Nov 23 04:49 UTC | 28 Nov 23 04:50 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-143970                | pause-143970                | jenkins | v1.32.0 | 28 Nov 23 04:50 UTC | 28 Nov 23 04:51 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p missing-upgrade-934743      | missing-upgrade-934743      | jenkins | v1.32.0 | 28 Nov 23 04:50 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-934743      | missing-upgrade-934743      | jenkins | v1.32.0 | 28 Nov 23 04:51 UTC | 28 Nov 23 04:51 UTC |
	| start   | -p kubernetes-upgrade-541146   | kubernetes-upgrade-541146   | jenkins | v1.32.0 | 28 Nov 23 04:51 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:51:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:51:29.534429 1370266 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:51:29.534671 1370266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:51:29.534691 1370266 out.go:309] Setting ErrFile to fd 2...
	I1128 04:51:29.534710 1370266 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:51:29.535027 1370266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:51:29.535461 1370266 out.go:303] Setting JSON to false
	I1128 04:51:29.536638 1370266 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27224,"bootTime":1701119865,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:51:29.536763 1370266 start.go:138] virtualization:  
	I1128 04:51:29.539403 1370266 out.go:177] * [kubernetes-upgrade-541146] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:51:29.545610 1370266 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:51:29.545683 1370266 notify.go:220] Checking for updates...
	I1128 04:51:29.547708 1370266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:51:29.549593 1370266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:51:29.551477 1370266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:51:29.553384 1370266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:51:29.555381 1370266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:51:29.557971 1370266 config.go:182] Loaded profile config "pause-143970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:51:29.558061 1370266 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:51:29.613068 1370266 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:51:29.613179 1370266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:51:29.757790 1370266 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:51:29.747682546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:51:29.757888 1370266 docker.go:295] overlay module found
	I1128 04:51:29.760405 1370266 out.go:177] * Using the docker driver based on user configuration
	I1128 04:51:29.763818 1370266 start.go:298] selected driver: docker
	I1128 04:51:29.763838 1370266 start.go:902] validating driver "docker" against <nil>
	I1128 04:51:29.763852 1370266 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:51:29.764488 1370266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:51:29.888427 1370266 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:51:29.879228974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:51:29.888593 1370266 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 04:51:29.888846 1370266 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1128 04:51:29.891304 1370266 out.go:177] * Using Docker driver with root privileges
	I1128 04:51:29.893353 1370266 cni.go:84] Creating CNI manager for ""
	I1128 04:51:29.893377 1370266 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:51:29.893388 1370266 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1128 04:51:29.893402 1370266 start_flags.go:323] config:
	{Name:kubernetes-upgrade-541146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-541146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:51:29.895608 1370266 out.go:177] * Starting control plane node kubernetes-upgrade-541146 in cluster kubernetes-upgrade-541146
	I1128 04:51:29.897331 1370266 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:51:29.899428 1370266 out.go:177] * Pulling base image ...
	I1128 04:51:29.901356 1370266 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 04:51:29.901412 1370266 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1128 04:51:29.901426 1370266 cache.go:56] Caching tarball of preloaded images
	I1128 04:51:29.901508 1370266 preload.go:174] Found /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1128 04:51:29.901523 1370266 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1128 04:51:29.901638 1370266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/config.json ...
	I1128 04:51:29.901663 1370266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/config.json: {Name:mk586f1b37b10a417886f0c595fb5bb4b3c8220d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:29.901831 1370266 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:51:29.934786 1370266 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1128 04:51:29.934815 1370266 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1128 04:51:29.934864 1370266 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:51:29.934929 1370266 start.go:365] acquiring machines lock for kubernetes-upgrade-541146: {Name:mk7612ca682106ebe84f315fb9128dfcbb3ccfee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:51:29.935546 1370266 start.go:369] acquired machines lock for "kubernetes-upgrade-541146" in 593.343µs
	I1128 04:51:29.935586 1370266 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-541146 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-541146 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:51:29.935673 1370266 start.go:125] createHost starting for "" (driver="docker")
	I1128 04:51:28.696646 1366652 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 04:51:28.704970 1366652 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 04:51:28.704991 1366652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 04:51:28.755634 1366652 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 04:51:30.131138 1366652 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.37546283s)
	I1128 04:51:30.131172 1366652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:51:30.145621 1366652 system_pods.go:59] 7 kube-system pods found
	I1128 04:51:30.145670 1366652 system_pods.go:61] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:51:30.145703 1366652 system_pods.go:61] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:51:30.145718 1366652 system_pods.go:61] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:30.145731 1366652 system_pods.go:61] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:51:30.145746 1366652 system_pods.go:61] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:51:30.145753 1366652 system_pods.go:61] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:30.145766 1366652 system_pods.go:61] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:51:30.145798 1366652 system_pods.go:74] duration metric: took 14.616163ms to wait for pod list to return data ...
	I1128 04:51:30.145828 1366652 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:51:30.150412 1366652 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:51:30.150456 1366652 node_conditions.go:123] node cpu capacity is 2
	I1128 04:51:30.150470 1366652 node_conditions.go:105] duration metric: took 4.6182ms to run NodePressure ...
	I1128 04:51:30.150495 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:51:30.463619 1366652 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:51:30.470959 1366652 kubeadm.go:787] kubelet initialised
	I1128 04:51:30.471006 1366652 kubeadm.go:788] duration metric: took 7.359224ms waiting for restarted kubelet to initialise ...
	I1128 04:51:30.471015 1366652 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:30.479093 1366652 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
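The pod_ready loop that starts here polls each system-critical pod until its Ready condition reports True. Roughly the same check can be run by hand with kubectl, since the kubeconfig context for this profile already exists (a sketch, not what minikube executes internally):

    kubectl --context pause-143970 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s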
	I1128 04:51:29.940451 1370266 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1128 04:51:29.940804 1370266 start.go:159] libmachine.API.Create for "kubernetes-upgrade-541146" (driver="docker")
	I1128 04:51:29.940838 1370266 client.go:168] LocalClient.Create starting
	I1128 04:51:29.940917 1370266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem
	I1128 04:51:29.940959 1370266 main.go:141] libmachine: Decoding PEM data...
	I1128 04:51:29.940980 1370266 main.go:141] libmachine: Parsing certificate...
	I1128 04:51:29.941040 1370266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem
	I1128 04:51:29.941064 1370266 main.go:141] libmachine: Decoding PEM data...
	I1128 04:51:29.941079 1370266 main.go:141] libmachine: Parsing certificate...
	I1128 04:51:29.941591 1370266 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-541146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1128 04:51:29.967152 1370266 cli_runner.go:211] docker network inspect kubernetes-upgrade-541146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1128 04:51:29.967234 1370266 network_create.go:281] running [docker network inspect kubernetes-upgrade-541146] to gather additional debugging logs...
	I1128 04:51:29.967251 1370266 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-541146
	W1128 04:51:29.992515 1370266 cli_runner.go:211] docker network inspect kubernetes-upgrade-541146 returned with exit code 1
	I1128 04:51:29.992543 1370266 network_create.go:284] error running [docker network inspect kubernetes-upgrade-541146]: docker network inspect kubernetes-upgrade-541146: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-541146 not found
	I1128 04:51:29.992555 1370266 network_create.go:286] output of [docker network inspect kubernetes-upgrade-541146]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-541146 not found
	
	** /stderr **
	I1128 04:51:29.992685 1370266 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:51:30.034613 1370266 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-457410d7183c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:60:a5:a2:7c} reservation:<nil>}
	I1128 04:51:30.034964 1370266 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0d78a22dd546 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bd:04:fe:9e} reservation:<nil>}
	I1128 04:51:30.036066 1370266 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-686ec87fec55 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a3:ec:05:d2} reservation:<nil>}
	I1128 04:51:30.036763 1370266 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025f6360}
	I1128 04:51:30.036829 1370266 network_create.go:124] attempt to create docker network kubernetes-upgrade-541146 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1128 04:51:30.036927 1370266 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-541146 kubernetes-upgrade-541146
	I1128 04:51:30.187702 1370266 network_create.go:108] docker network kubernetes-upgrade-541146 192.168.76.0/24 created
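The create landed on the first free /24 after three taken subnets were skipped. The chosen subnet and gateway can be confirmed afterwards with an inspect template (a sketch):

    docker network inspect kubernetes-upgrade-541146 \
      -f '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'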
	I1128 04:51:30.187740 1370266 kic.go:121] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-541146" container
	I1128 04:51:30.187818 1370266 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1128 04:51:30.213072 1370266 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-541146 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-541146 --label created_by.minikube.sigs.k8s.io=true
	I1128 04:51:30.241497 1370266 oci.go:103] Successfully created a docker volume kubernetes-upgrade-541146
	I1128 04:51:30.241584 1370266 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-541146-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-541146 --entrypoint /usr/bin/test -v kubernetes-upgrade-541146:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1128 04:51:30.993170 1370266 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-541146
	I1128 04:51:30.993229 1370266 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 04:51:30.993252 1370266 kic.go:194] Starting extracting preloaded images to volume ...
	I1128 04:51:30.993346 1370266 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-541146:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
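This tar invocation runs in a throwaway kicbase container: the host's preload tarball is bind-mounted read-only at /preloaded.tar and unpacked into the kubernetes-upgrade-541146 volume, which is later mounted as the node's /var. The populated volume can be spot-checked the same way, overriding the entrypoint as the log's own sidecar does (a sketch; image digest omitted for brevity):

    docker run --rm --entrypoint /bin/ls \
      -v kubernetes-upgrade-541146:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634 \
      /var/lib/containers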
	I1128 04:51:32.505917 1366652 pod_ready.go:92] pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:32.505945 1366652 pod_ready.go:81] duration metric: took 2.026818124s waiting for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:32.505958 1366652 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.527276 1366652 pod_ready.go:92] pod "etcd-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.527318 1366652 pod_ready.go:81] duration metric: took 1.021347081s waiting for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.527333 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.534523 1366652 pod_ready.go:92] pod "kube-apiserver-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.534552 1366652 pod_ready.go:81] duration metric: took 7.211105ms waiting for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.534564 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.735118 1366652 pod_ready.go:92] pod "kube-controller-manager-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:33.735146 1366652 pod_ready.go:81] duration metric: took 200.564503ms waiting for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:33.735159 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:34.134677 1366652 pod_ready.go:92] pod "kube-proxy-l29m5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:34.134702 1366652 pod_ready.go:81] duration metric: took 399.53662ms waiting for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:34.134714 1366652 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:36.443255 1366652 pod_ready.go:102] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"False"
	I1128 04:51:38.395206 1370266 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-541146:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (7.401804929s)
	I1128 04:51:38.395243 1370266 kic.go:203] duration metric: took 7.401988 seconds to extract preloaded images to volume
	W1128 04:51:38.395393 1370266 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1128 04:51:38.395507 1370266 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1128 04:51:38.475157 1370266 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-541146 --name kubernetes-upgrade-541146 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-541146 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-541146 --network kubernetes-upgrade-541146 --ip 192.168.76.2 --volume kubernetes-upgrade-541146:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 04:51:38.857971 1370266 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-541146 --format={{.State.Running}}
	I1128 04:51:38.887429 1370266 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-541146 --format={{.State.Status}}
	I1128 04:51:38.915629 1370266 cli_runner.go:164] Run: docker exec kubernetes-upgrade-541146 stat /var/lib/dpkg/alternatives/iptables
	I1128 04:51:39.011674 1370266 oci.go:144] the created container "kubernetes-upgrade-541146" has a running status.
	I1128 04:51:39.011719 1370266 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/kubernetes-upgrade-541146/id_rsa...
	I1128 04:51:38.444024 1366652 pod_ready.go:102] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"False"
	I1128 04:51:40.444791 1366652 pod_ready.go:92] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.444814 1366652 pod_ready.go:81] duration metric: took 6.310092376s waiting for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.444823 1366652 pod_ready.go:38] duration metric: took 9.973766517s for the extra wait for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:40.444839 1366652 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:51:40.456537 1366652 ops.go:34] apiserver oom_adj: -16
	I1128 04:51:40.456614 1366652 kubeadm.go:640] restartCluster took 31.744473113s
	I1128 04:51:40.456636 1366652 kubeadm.go:406] StartCluster complete in 31.843899524s
	I1128 04:51:40.456713 1366652 settings.go:142] acquiring lock: {Name:mk51bec1305a61d1e5f21881e1d4b01dfafff7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:40.456832 1366652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:51:40.457684 1366652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/kubeconfig: {Name:mkdd24900acdf0a7a11c60f4e6d81c9963f4153d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:51:40.458023 1366652 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:51:40.458411 1366652 config.go:182] Loaded profile config "pause-143970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:51:40.458720 1366652 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:51:40.462401 1366652 out.go:177] * Enabled addons: 
	I1128 04:51:40.459934 1366652 kapi.go:59] client config for pause-143970: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/pause-143970/client.key", CAFile:"/home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 04:51:40.464696 1366652 addons.go:502] enable addons completed in 5.993972ms: enabled=[]
	I1128 04:51:40.469355 1366652 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-143970" context rescaled to 1 replicas
	I1128 04:51:40.469395 1366652 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:51:40.471386 1366652 out.go:177] * Verifying Kubernetes components...
	I1128 04:51:40.473192 1366652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:51:40.658375 1366652 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1128 04:51:40.658429 1366652 node_ready.go:35] waiting up to 6m0s for node "pause-143970" to be "Ready" ...
	I1128 04:51:40.662633 1366652 node_ready.go:49] node "pause-143970" has status "Ready":"True"
	I1128 04:51:40.662654 1366652 node_ready.go:38] duration metric: took 4.212729ms waiting for node "pause-143970" to be "Ready" ...
	I1128 04:51:40.662666 1366652 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:40.671706 1366652 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.680813 1366652 pod_ready.go:92] pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.680885 1366652 pod_ready.go:81] duration metric: took 9.102378ms waiting for pod "coredns-5dd5756b68-mxxmx" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.680918 1366652 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.712981 1366652 pod_ready.go:92] pod "etcd-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.713005 1366652 pod_ready.go:81] duration metric: took 32.066002ms waiting for pod "etcd-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.713022 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.934601 1366652 pod_ready.go:92] pod "kube-apiserver-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:40.934622 1366652 pod_ready.go:81] duration metric: took 221.592391ms waiting for pod "kube-apiserver-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:40.934634 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.335746 1366652 pod_ready.go:92] pod "kube-controller-manager-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:41.335777 1366652 pod_ready.go:81] duration metric: took 401.135898ms waiting for pod "kube-controller-manager-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.335789 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.735161 1366652 pod_ready.go:92] pod "kube-proxy-l29m5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:41.735190 1366652 pod_ready.go:81] duration metric: took 399.387969ms waiting for pod "kube-proxy-l29m5" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:41.735203 1366652 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:42.140039 1366652 pod_ready.go:92] pod "kube-scheduler-pause-143970" in "kube-system" namespace has status "Ready":"True"
	I1128 04:51:42.140526 1366652 pod_ready.go:81] duration metric: took 405.297666ms waiting for pod "kube-scheduler-pause-143970" in "kube-system" namespace to be "Ready" ...
	I1128 04:51:42.140583 1366652 pod_ready.go:38] duration metric: took 1.4779064s for the extra wait for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:51:42.140618 1366652 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:51:42.140745 1366652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:51:42.174310 1366652 api_server.go:72] duration metric: took 1.704883298s to wait for apiserver process to appear ...
	I1128 04:51:42.174335 1366652 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:51:42.174353 1366652 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1128 04:51:42.187802 1366652 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1128 04:51:42.189709 1366652 api_server.go:141] control plane version: v1.28.4
	I1128 04:51:42.189759 1366652 api_server.go:131] duration metric: took 15.416405ms to wait for apiserver health ...
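The healthz probe is a plain HTTPS GET against the apiserver. It can be repeated outside minikube with curl, authenticating with the profile's client certificate (paths taken verbatim from the kapi.go client config logged above; a sketch):

    MK=/home/jenkins/minikube-integration/17671-1256059/.minikube
    curl --cacert $MK/ca.crt \
      --cert $MK/profiles/pause-143970/client.crt \
      --key  $MK/profiles/pause-143970/client.key \
      https://192.168.67.2:8443/healthz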
	I1128 04:51:42.189777 1366652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:51:42.340088 1366652 system_pods.go:59] 7 kube-system pods found
	I1128 04:51:42.340187 1366652 system_pods.go:61] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running
	I1128 04:51:42.340215 1366652 system_pods.go:61] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running
	I1128 04:51:42.340240 1366652 system_pods.go:61] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:42.340261 1366652 system_pods.go:61] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running
	I1128 04:51:42.340282 1366652 system_pods.go:61] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running
	I1128 04:51:42.340313 1366652 system_pods.go:61] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:42.340333 1366652 system_pods.go:61] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running
	I1128 04:51:42.340354 1366652 system_pods.go:74] duration metric: took 150.570298ms to wait for pod list to return data ...
	I1128 04:51:42.340383 1366652 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:51:42.534711 1366652 default_sa.go:45] found service account: "default"
	I1128 04:51:42.534740 1366652 default_sa.go:55] duration metric: took 194.336793ms for default service account to be created ...
	I1128 04:51:42.534751 1366652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:51:42.738771 1366652 system_pods.go:86] 7 kube-system pods found
	I1128 04:51:42.738853 1366652 system_pods.go:89] "coredns-5dd5756b68-mxxmx" [dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5] Running
	I1128 04:51:42.738861 1366652 system_pods.go:89] "etcd-pause-143970" [d81b82e6-028a-45bc-b55e-475ff9009100] Running
	I1128 04:51:42.738866 1366652 system_pods.go:89] "kindnet-nxh4c" [f4689f10-5d67-46e1-85cb-7aadda9b847b] Running
	I1128 04:51:42.738902 1366652 system_pods.go:89] "kube-apiserver-pause-143970" [f0b85ba8-5b8e-4e71-9de4-9f326e2bbf21] Running
	I1128 04:51:42.738910 1366652 system_pods.go:89] "kube-controller-manager-pause-143970" [650191f9-4cf2-4f67-96ac-e7f783a16dda] Running
	I1128 04:51:42.738916 1366652 system_pods.go:89] "kube-proxy-l29m5" [864ee813-0e93-434e-8930-250e69f33cfe] Running
	I1128 04:51:42.738920 1366652 system_pods.go:89] "kube-scheduler-pause-143970" [de7ca15c-b7ce-4155-8977-5e9fb41af6af] Running
	I1128 04:51:42.738928 1366652 system_pods.go:126] duration metric: took 204.171344ms to wait for k8s-apps to be running ...
	I1128 04:51:42.738937 1366652 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:51:42.739005 1366652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:51:42.757301 1366652 system_svc.go:56] duration metric: took 18.355869ms WaitForService to wait for kubelet.
	I1128 04:51:42.757342 1366652 kubeadm.go:581] duration metric: took 2.287924128s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:51:42.757361 1366652 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:51:42.935332 1366652 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 04:51:42.935360 1366652 node_conditions.go:123] node cpu capacity is 2
	I1128 04:51:42.935372 1366652 node_conditions.go:105] duration metric: took 177.980668ms to run NodePressure ...
	I1128 04:51:42.935384 1366652 start.go:228] waiting for startup goroutines ...
	I1128 04:51:42.935392 1366652 start.go:233] waiting for cluster config update ...
	I1128 04:51:42.935399 1366652 start.go:242] writing updated cluster config ...
	I1128 04:51:42.935694 1366652 ssh_runner.go:195] Run: rm -f paused
	I1128 04:51:43.024283 1366652 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:51:43.026867 1366652 out.go:177] * Done! kubectl is now configured to use "pause-143970" cluster and "default" namespace by default
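With the profile reporting Done and the kubeconfig already rewritten at 04:51:40.457, the restarted cluster can be inspected directly:

    kubectl --context pause-143970 get pods -A -o wide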
	I1128 04:51:39.982726 1370266 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/kubernetes-upgrade-541146/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1128 04:51:40.036012 1370266 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-541146 --format={{.State.Status}}
	I1128 04:51:40.068337 1370266 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1128 04:51:40.068365 1370266 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-541146 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1128 04:51:40.171181 1370266 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-541146 --format={{.State.Status}}
	I1128 04:51:40.208521 1370266 machine.go:88] provisioning docker machine ...
	I1128 04:51:40.208558 1370266 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-541146"
	I1128 04:51:40.208639 1370266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-541146
	I1128 04:51:40.251084 1370266 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:40.251532 1370266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34483 <nil> <nil>}
	I1128 04:51:40.251548 1370266 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-541146 && echo "kubernetes-upgrade-541146" | sudo tee /etc/hostname
	I1128 04:51:40.493134 1370266 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-541146
	
	I1128 04:51:40.493236 1370266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-541146
	I1128 04:51:40.525297 1370266 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:40.525715 1370266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34483 <nil> <nil>}
	I1128 04:51:40.525736 1370266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-541146' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-541146/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-541146' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:51:40.686226 1370266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:51:40.686262 1370266 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:51:40.686283 1370266 ubuntu.go:177] setting up certificates
	I1128 04:51:40.686293 1370266 provision.go:83] configureAuth start
	I1128 04:51:40.686363 1370266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-541146
	I1128 04:51:40.714095 1370266 provision.go:138] copyHostCerts
	I1128 04:51:40.714189 1370266 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:51:40.714197 1370266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:51:40.714281 1370266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:51:40.714374 1370266 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:51:40.714379 1370266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:51:40.714458 1370266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:51:40.714510 1370266 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:51:40.714514 1370266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:51:40.714538 1370266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:51:40.714579 1370266 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-541146 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-541146]
	I1128 04:51:41.294928 1370266 provision.go:172] copyRemoteCerts
	I1128 04:51:41.295025 1370266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:51:41.295105 1370266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-541146
	I1128 04:51:41.313213 1370266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34483 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/kubernetes-upgrade-541146/id_rsa Username:docker}
	I1128 04:51:41.416059 1370266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:51:41.446807 1370266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1128 04:51:41.479074 1370266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 04:51:41.509278 1370266 provision.go:86] duration metric: configureAuth took 822.962883ms
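configureAuth generated a server certificate signed by the local minikube CA, with SANs for 192.168.76.2, localhost, and the machine name, and scp'd it to /etc/docker on the node. The host-side copies named above can be checked offline with openssl (a sketch; the -ext flag needs OpenSSL 1.1.1 or newer):

    MK=/home/jenkins/minikube-integration/17671-1256059/.minikube
    openssl verify -CAfile $MK/certs/ca.pem $MK/machines/server.pem
    openssl x509 -in $MK/machines/server.pem -noout -ext subjectAltName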
	I1128 04:51:41.509305 1370266 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:51:41.509530 1370266 config.go:182] Loaded profile config "kubernetes-upgrade-541146": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:51:41.509647 1370266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-541146
	I1128 04:51:41.528115 1370266 main.go:141] libmachine: Using SSH client type: native
	I1128 04:51:41.528620 1370266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34483 <nil> <nil>}
	I1128 04:51:41.528641 1370266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:51:41.883223 1370266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:51:41.883322 1370266 machine.go:91] provisioned docker machine in 1.674776375s
	I1128 04:51:41.883351 1370266 client.go:171] LocalClient.Create took 11.942502527s
	I1128 04:51:41.883400 1370266 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-541146" took 11.942598215s
	I1128 04:51:41.883442 1370266 start.go:300] post-start starting for "kubernetes-upgrade-541146" (driver="docker")
	I1128 04:51:41.883471 1370266 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:51:41.883575 1370266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:51:41.883653 1370266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-541146
	I1128 04:51:41.903829 1370266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34483 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/kubernetes-upgrade-541146/id_rsa Username:docker}
	I1128 04:51:42.000057 1370266 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:51:42.019473 1370266 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:51:42.019518 1370266 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:51:42.019531 1370266 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:51:42.019540 1370266 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1128 04:51:42.019555 1370266 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:51:42.019631 1370266 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:51:42.019723 1370266 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:51:42.019851 1370266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:51:42.033306 1370266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:51:42.071865 1370266 start.go:303] post-start completed in 188.385176ms
	I1128 04:51:42.072379 1370266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-541146
	I1128 04:51:42.100299 1370266 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/config.json ...
	I1128 04:51:42.100618 1370266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:51:42.100682 1370266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-541146
	I1128 04:51:42.128466 1370266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34483 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/kubernetes-upgrade-541146/id_rsa Username:docker}
	I1128 04:51:42.241848 1370266 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:51:42.250450 1370266 start.go:128] duration metric: createHost completed in 12.314759247s
	I1128 04:51:42.250477 1370266 start.go:83] releasing machines lock for "kubernetes-upgrade-541146", held for 12.314911369s
	I1128 04:51:42.250558 1370266 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-541146
	I1128 04:51:42.283018 1370266 ssh_runner.go:195] Run: cat /version.json
	I1128 04:51:42.283078 1370266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-541146
	I1128 04:51:42.283572 1370266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:51:42.283647 1370266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-541146
	I1128 04:51:42.307497 1370266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34483 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/kubernetes-upgrade-541146/id_rsa Username:docker}
	I1128 04:51:42.331467 1370266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34483 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/kubernetes-upgrade-541146/id_rsa Username:docker}
	I1128 04:51:42.417602 1370266 ssh_runner.go:195] Run: systemctl --version
	I1128 04:51:42.557405 1370266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:51:42.711645 1370266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:51:42.717656 1370266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:51:42.753128 1370266 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:51:42.753256 1370266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:51:42.802001 1370266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1128 04:51:42.802072 1370266 start.go:472] detecting cgroup driver to use...
	I1128 04:51:42.802118 1370266 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:51:42.802203 1370266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:51:42.824521 1370266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:51:42.839133 1370266 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:51:42.839273 1370266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:51:42.856464 1370266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:51:42.874818 1370266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:51:43.002143 1370266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:51:43.192311 1370266 docker.go:219] disabling docker service ...
	I1128 04:51:43.192388 1370266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:51:43.223248 1370266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:51:43.239968 1370266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:51:43.385128 1370266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:51:43.517120 1370266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:51:43.533577 1370266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:51:43.556369 1370266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 04:51:43.556438 1370266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:51:43.569468 1370266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:51:43.569540 1370266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:51:43.582704 1370266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:51:43.596256 1370266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:51:43.610088 1370266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:51:43.622360 1370266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:51:43.634630 1370266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:51:43.646629 1370266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:51:43.768850 1370266 ssh_runner.go:195] Run: sudo systemctl restart crio
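The four sed invocations above leave /etc/crio/crio.conf.d/02-crio.conf carrying exactly three overrides, which can be confirmed on the node once crio is back up:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected:
    #   pause_image = "registry.k8s.io/pause:3.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"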
	I1128 04:51:43.958757 1370266 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:51:43.958841 1370266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:51:43.968204 1370266 start.go:540] Will wait 60s for crictl version
	I1128 04:51:43.968271 1370266 ssh_runner.go:195] Run: which crictl
	I1128 04:51:43.973638 1370266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:51:44.022731 1370266 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
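crictl picked up the socket from the /etc/crictl.yaml written at 04:51:43.533; the same query works with the endpoint passed explicitly:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version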
	I1128 04:51:44.022834 1370266 ssh_runner.go:195] Run: crio --version
	I1128 04:51:44.085328 1370266 ssh_runner.go:195] Run: crio --version
	I1128 04:51:44.145118 1370266 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.6 ...
	I1128 04:51:44.146825 1370266 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-541146 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 04:51:44.171745 1370266 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1128 04:51:44.176641 1370266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
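The /etc/hosts update above is deliberately filter-then-append into a temp file, so the record is never duplicated and the copy back is a single step. The same pattern works for any single-record hosts entry (a sketch; 10.0.0.1 and example.internal are hypothetical):

    # drop any stale record, append the fresh one, install atomically
    { grep -v $'\texample.internal$' /etc/hosts; \
      printf '%s\t%s\n' 10.0.0.1 example.internal; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts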
	I1128 04:51:44.192809 1370266 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 04:51:44.192883 1370266 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:51:44.259196 1370266 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1128 04:51:44.259312 1370266 ssh_runner.go:195] Run: which lz4
	I1128 04:51:44.264780 1370266 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 04:51:44.270379 1370266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:51:44.270418 1370266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (424042838 bytes)
	
	* 
	* ==> CRI-O <==
	* Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.044650949Z" level=info msg="Creating container: kube-system/kube-proxy-l29m5/kube-proxy" id=cc02b490-cd55-428c-b535-188da4cba81e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.045005885Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.067125427Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bb328f8057f7d61623a761d19ea77aff23acdb689ad5d672684256891669f27a/merged/etc/passwd: no such file or directory"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.067176003Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bb328f8057f7d61623a761d19ea77aff23acdb689ad5d672684256891669f27a/merged/etc/group: no such file or directory"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.249312129Z" level=info msg="Created container e709ce489bc40cb16a6910cd5959260baae04364470ee0fdbe2be9ab52a32f16: kube-system/coredns-5dd5756b68-mxxmx/coredns" id=3fb1f18d-1a79-4f39-a4a7-ac22480e0530 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.249897160Z" level=info msg="Starting container: e709ce489bc40cb16a6910cd5959260baae04364470ee0fdbe2be9ab52a32f16" id=4bee7487-00ea-4bd6-b12e-a26224acdcb1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.270270471Z" level=info msg="Created container f06f5f2557e36b4b527398f0ecf47a9211a9549b8582eb05f90612db8857ef09: kube-system/kube-proxy-l29m5/kube-proxy" id=cc02b490-cd55-428c-b535-188da4cba81e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.271070565Z" level=info msg="Starting container: f06f5f2557e36b4b527398f0ecf47a9211a9549b8582eb05f90612db8857ef09" id=c74281ea-7090-49f4-b97c-28c9ca2a2e90 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.277127354Z" level=info msg="Started container" PID=3258 containerID=e709ce489bc40cb16a6910cd5959260baae04364470ee0fdbe2be9ab52a32f16 description=kube-system/coredns-5dd5756b68-mxxmx/coredns id=4bee7487-00ea-4bd6-b12e-a26224acdcb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f7b41abe2e8d557d9cf90ace79dce20ab688613318ff481a2cd638503ca0644b
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.285441872Z" level=info msg="Created container 35159b6382dd9dc26afcfb62439a2e2dfeb8f1d83724425e2321e6b515b0af84: kube-system/kindnet-nxh4c/kindnet-cni" id=2a173381-050b-4f70-91e1-dc2ef9ff9fd6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.286228658Z" level=info msg="Starting container: 35159b6382dd9dc26afcfb62439a2e2dfeb8f1d83724425e2321e6b515b0af84" id=be372780-48bb-42fa-9579-14537bcecad9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.308700895Z" level=info msg="Started container" PID=3266 containerID=35159b6382dd9dc26afcfb62439a2e2dfeb8f1d83724425e2321e6b515b0af84 description=kube-system/kindnet-nxh4c/kindnet-cni id=be372780-48bb-42fa-9579-14537bcecad9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=643404fca0a2c7364e34b414e70c3b01b8be086d6b2664b64903c920c9d1c831
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.341430779Z" level=info msg="Started container" PID=3264 containerID=f06f5f2557e36b4b527398f0ecf47a9211a9549b8582eb05f90612db8857ef09 description=kube-system/kube-proxy-l29m5/kube-proxy id=c74281ea-7090-49f4-b97c-28c9ca2a2e90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7889be2eb2c8aa6d63702441b59ff5e7c20bfa178ce7c0bac9eab1235acbbe6a
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.707431169Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.721008248Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.721043308Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.721058422Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.738045855Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.738087332Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.793496147Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.825321240Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.825355742Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.825372718Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.860081250Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 28 04:51:28 pause-143970 crio[2558]: time="2023-11-28 04:51:28.860118731Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	35159b6382dd9       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   21 seconds ago      Running             kindnet-cni               2                   643404fca0a2c       kindnet-nxh4c
	f06f5f2557e36       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   21 seconds ago      Running             kube-proxy                2                   7889be2eb2c8a       kube-proxy-l29m5
	e709ce489bc40       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   21 seconds ago      Running             coredns                   2                   f7b41abe2e8d5       coredns-5dd5756b68-mxxmx
	c8bd390ee9f62       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   27 seconds ago      Running             kube-controller-manager   2                   c277ccb679063       kube-controller-manager-pause-143970
	09f9bf7b4d6c3       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   27 seconds ago      Running             etcd                      2                   ec39115b318dd       etcd-pause-143970
	70d0cdc81c3fc       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   27 seconds ago      Running             kube-apiserver            2                   fbb3db5147c09       kube-apiserver-pause-143970
	329e40cdef7b2       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   27 seconds ago      Running             kube-scheduler            2                   7ded19db045a1       kube-scheduler-pause-143970
	194b96a9b24c3       05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54   52 seconds ago      Exited              kube-scheduler            1                   7ded19db045a1       kube-scheduler-pause-143970
	b43da3b0d1180       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   52 seconds ago      Exited              kindnet-cni               1                   643404fca0a2c       kindnet-nxh4c
	b03030592f77e       9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b   52 seconds ago      Exited              kube-controller-manager   1                   c277ccb679063       kube-controller-manager-pause-143970
	74c29b25ba174       04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419   52 seconds ago      Exited              kube-apiserver            1                   fbb3db5147c09       kube-apiserver-pause-143970
	8448df08bddbd       3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39   52 seconds ago      Exited              kube-proxy                1                   7889be2eb2c8a       kube-proxy-l29m5
	323f5946e5014       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   52 seconds ago      Exited              etcd                      1                   ec39115b318dd       etcd-pause-143970
	16891782906b6       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   52 seconds ago      Exited              coredns                   1                   f7b41abe2e8d5       coredns-5dd5756b68-mxxmx
	
	* 
	* ==> coredns [16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51852 - 53916 "HINFO IN 227728516948944030.7492445755128998669. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021311119s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [e709ce489bc40cb16a6910cd5959260baae04364470ee0fdbe2be9ab52a32f16] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56033 - 56771 "HINFO IN 5308972760632549043.379345747403652935. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.046246065s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-143970
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-143970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=pause-143970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_50_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:49:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-143970
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:51:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:51:27 +0000   Tue, 28 Nov 2023 04:49:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:51:27 +0000   Tue, 28 Nov 2023 04:49:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:51:27 +0000   Tue, 28 Nov 2023 04:49:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:51:27 +0000   Tue, 28 Nov 2023 04:50:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-143970
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 12c93c2ee1834eb08542e13124f46040
	  System UUID:                451828b9-82a9-4939-ade2-4b87426ccaea
	  Boot ID:                    29ce650a-e22a-4e0d-bffe-126490eafcf6
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-mxxmx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     99s
	  kube-system                 etcd-pause-143970                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kindnet-nxh4c                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      99s
	  kube-system                 kube-apiserver-pause-143970             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-pause-143970    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-l29m5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-pause-143970             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 97s                  kube-proxy       
	  Normal  Starting                 21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node pause-143970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node pause-143970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node pause-143970 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node pause-143970 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node pause-143970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node pause-143970 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           100s                 node-controller  Node pause-143970 event: Registered Node pause-143970 in Controller
	  Normal  NodeReady                67s                  kubelet          Node pause-143970 status is now: NodeReady
	  Normal  Starting                 29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 29s)    kubelet          Node pause-143970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 29s)    kubelet          Node pause-143970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x8 over 29s)    kubelet          Node pause-143970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                   node-controller  Node pause-143970 event: Registered Node pause-143970 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001139] FS-Cache: O-key=[8] '4f415c0100000000'
	[  +0.000745] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008c3c2dae
	[  +0.001123] FS-Cache: N-key=[8] '4f415c0100000000'
	[  +0.003665] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001013] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=0000000000e844bc
	[  +0.001111] FS-Cache: O-key=[8] '4f415c0100000000'
	[  +0.000746] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000975] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=00000000dbb2bfbf
	[  +0.001088] FS-Cache: N-key=[8] '4f415c0100000000'
	[  +2.166969] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001027] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=000000001868d326
	[  +0.001142] FS-Cache: O-key=[8] '4e415c0100000000'
	[  +0.000817] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000008c3c2dae
	[  +0.001108] FS-Cache: N-key=[8] '4e415c0100000000'
	[  +0.392945] FS-Cache: Duplicate cookie detected
	[  +0.000738] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001131] FS-Cache: O-cookie d=00000000eda2eaf1{9p.inode} n=000000007d358928
	[  +0.001121] FS-Cache: O-key=[8] '54415c0100000000'
	[  +0.000768] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000eda2eaf1{9p.inode} n=000000007d93f4ca
	[  +0.001181] FS-Cache: N-key=[8] '54415c0100000000'
	
	* 
	* ==> etcd [09f9bf7b4d6c3b8a21b9d36ce96eede803014f28ccc7125c66e0827be2e5dc1f] <==
	* {"level":"info","ts":"2023-11-28T04:51:22.142167Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T04:51:22.142178Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T04:51:22.151924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-11-28T04:51:22.152057Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-11-28T04:51:22.152332Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:51:22.152377Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:51:22.167073Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T04:51:22.167274Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T04:51:22.167306Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T04:51:22.167376Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-11-28T04:51:22.16739Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-11-28T04:51:23.069548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-28T04:51:23.069606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-28T04:51:23.069623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-11-28T04:51:23.069636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-11-28T04:51:23.069645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-11-28T04:51:23.069655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-11-28T04:51:23.069664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-11-28T04:51:23.072963Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-143970 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T04:51:23.072996Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:51:23.074316Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-11-28T04:51:23.073041Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:51:23.075492Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T04:51:23.085743Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T04:51:23.085773Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [323f5946e501407ad3687ce00fff2eb65b24c26990114b1e71be2f80053ecb20] <==
	* 
	* 
	* ==> kernel <==
	*  04:51:50 up  7:34,  0 users,  load average: 3.86, 2.66, 2.13
	Linux pause-143970 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [35159b6382dd9dc26afcfb62439a2e2dfeb8f1d83724425e2321e6b515b0af84] <==
	* I1128 04:51:28.396157       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1128 04:51:28.396221       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1128 04:51:28.396358       1 main.go:116] setting mtu 1500 for CNI 
	I1128 04:51:28.396368       1 main.go:146] kindnetd IP family: "ipv4"
	I1128 04:51:28.396380       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1128 04:51:28.704416       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1128 04:51:28.704449       1 main.go:227] handling current node
	I1128 04:51:38.809542       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1128 04:51:38.809680       1 main.go:227] handling current node
	I1128 04:51:48.828117       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1128 04:51:48.828230       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868] <==
	* I1128 04:50:57.240409       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1128 04:50:57.240508       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1128 04:50:57.242011       1 main.go:116] setting mtu 1500 for CNI 
	I1128 04:50:57.242045       1 main.go:146] kindnetd IP family: "ipv4"
	I1128 04:50:57.242058       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kube-apiserver [70d0cdc81c3fc749e2f656a70e54e4378cb5290ea907d9871dfbcb9586c7df78] <==
	* I1128 04:51:27.269504       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1128 04:51:27.270114       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1128 04:51:27.270278       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1128 04:51:27.235798       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1128 04:51:27.235825       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1128 04:51:27.558029       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 04:51:27.560493       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1128 04:51:27.569194       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1128 04:51:27.570120       1 shared_informer.go:318] Caches are synced for configmaps
	I1128 04:51:27.583093       1 aggregator.go:166] initial CRD sync complete...
	I1128 04:51:27.583177       1 autoregister_controller.go:141] Starting autoregister controller
	I1128 04:51:27.583208       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1128 04:51:27.583261       1 cache.go:39] Caches are synced for autoregister controller
	I1128 04:51:27.645190       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1128 04:51:27.645286       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1128 04:51:27.654109       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1128 04:51:27.657246       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1128 04:51:27.665207       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1128 04:51:27.670669       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1128 04:51:28.305337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1128 04:51:30.109304       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1128 04:51:30.341021       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1128 04:51:30.357028       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1128 04:51:30.441048       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 04:51:30.451766       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [74c29b25ba1740d239ce880426025df4d31bcbbb0155303eb8150e8dffcd65ee] <==
	* 
	* 
	* ==> kube-controller-manager [b03030592f77ee33e63579e2d47bc9e4496b8244efd05ee4fd4fe065961e999f] <==
	* 
	* 
	* ==> kube-controller-manager [c8bd390ee9f62c921e5e42f68c3cf5899edaaff707741d4264e01cc3df054ca5] <==
	* I1128 04:51:40.181608       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1128 04:51:40.181639       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1128 04:51:40.181670       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1128 04:51:40.201979       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1128 04:51:40.209121       1 shared_informer.go:318] Caches are synced for HPA
	I1128 04:51:40.213033       1 shared_informer.go:318] Caches are synced for disruption
	I1128 04:51:40.213084       1 shared_informer.go:318] Caches are synced for stateful set
	I1128 04:51:40.223740       1 shared_informer.go:318] Caches are synced for persistent volume
	I1128 04:51:40.230558       1 shared_informer.go:318] Caches are synced for PVC protection
	I1128 04:51:40.244379       1 shared_informer.go:318] Caches are synced for resource quota
	I1128 04:51:40.248421       1 shared_informer.go:318] Caches are synced for attach detach
	I1128 04:51:40.272372       1 shared_informer.go:318] Caches are synced for expand
	I1128 04:51:40.279249       1 shared_informer.go:318] Caches are synced for ephemeral
	I1128 04:51:40.284181       1 shared_informer.go:318] Caches are synced for daemon sets
	I1128 04:51:40.327986       1 shared_informer.go:318] Caches are synced for resource quota
	I1128 04:51:40.328791       1 shared_informer.go:318] Caches are synced for taint
	I1128 04:51:40.328911       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1128 04:51:40.329015       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-143970"
	I1128 04:51:40.329068       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1128 04:51:40.329098       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I1128 04:51:40.329135       1 taint_manager.go:210] "Sending events to api server"
	I1128 04:51:40.329824       1 event.go:307] "Event occurred" object="pause-143970" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-143970 event: Registered Node pause-143970 in Controller"
	I1128 04:51:40.615686       1 shared_informer.go:318] Caches are synced for garbage collector
	I1128 04:51:40.677214       1 shared_informer.go:318] Caches are synced for garbage collector
	I1128 04:51:40.677267       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0] <==
	* I1128 04:50:57.381070       1 server_others.go:69] "Using iptables proxy"
	E1128 04:50:57.383763       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-143970": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [f06f5f2557e36b4b527398f0ecf47a9211a9549b8582eb05f90612db8857ef09] <==
	* I1128 04:51:28.421124       1 server_others.go:69] "Using iptables proxy"
	I1128 04:51:28.454747       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1128 04:51:28.512153       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1128 04:51:28.514595       1 server_others.go:152] "Using iptables Proxier"
	I1128 04:51:28.514701       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1128 04:51:28.514743       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1128 04:51:28.514861       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 04:51:28.515133       1 server.go:846] "Version info" version="v1.28.4"
	I1128 04:51:28.515377       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:51:28.516531       1 config.go:188] "Starting service config controller"
	I1128 04:51:28.516622       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 04:51:28.516896       1 config.go:97] "Starting endpoint slice config controller"
	I1128 04:51:28.516940       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 04:51:28.517482       1 config.go:315] "Starting node config controller"
	I1128 04:51:28.517539       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 04:51:28.616989       1 shared_informer.go:318] Caches are synced for service config
	I1128 04:51:28.617088       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 04:51:28.617621       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [194b96a9b24c3ad46992cea1fe214362341f776d26e215afb798198dea1f6708] <==
	* 
	* 
	* ==> kube-scheduler [329e40cdef7b24c9817f8524403cf0fe3faa1f6b212525287fd1b82f14448b77] <==
	* I1128 04:51:24.522740       1 serving.go:348] Generated self-signed cert in-memory
	W1128 04:51:27.466778       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 04:51:27.466885       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 04:51:27.466918       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 04:51:27.466957       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 04:51:27.586180       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1128 04:51:27.586285       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:51:27.601573       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 04:51:27.601686       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 04:51:27.602255       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1128 04:51:27.604563       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 04:51:27.702249       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 28 04:51:21 pause-143970 kubelet[3002]: W1128 04:51:21.988689    3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Nov 28 04:51:21 pause-143970 kubelet[3002]: E1128 04:51:21.988765    3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Nov 28 04:51:22 pause-143970 kubelet[3002]: W1128 04:51:22.070584    3002 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-143970&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Nov 28 04:51:22 pause-143970 kubelet[3002]: E1128 04:51:22.070876    3002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-143970&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Nov 28 04:51:22 pause-143970 kubelet[3002]: E1128 04:51:22.153941    3002 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-143970?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="1.6s"
	Nov 28 04:51:22 pause-143970 kubelet[3002]: I1128 04:51:22.260019    3002 kubelet_node_status.go:70] "Attempting to register node" node="pause-143970"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.674825    3002 kubelet_node_status.go:108] "Node was previously registered" node="pause-143970"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.674943    3002 kubelet_node_status.go:73] "Successfully registered node" node="pause-143970"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.677213    3002 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.678099    3002 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.717245    3002 apiserver.go:52] "Watching apiserver"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.721108    3002 topology_manager.go:215] "Topology Admit Handler" podUID="864ee813-0e93-434e-8930-250e69f33cfe" podNamespace="kube-system" podName="kube-proxy-l29m5"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.723176    3002 topology_manager.go:215] "Topology Admit Handler" podUID="dc06ee0a-8c8a-47c3-875d-b9ed1b3cddc5" podNamespace="kube-system" podName="coredns-5dd5756b68-mxxmx"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.723254    3002 topology_manager.go:215] "Topology Admit Handler" podUID="f4689f10-5d67-46e1-85cb-7aadda9b847b" podNamespace="kube-system" podName="kindnet-nxh4c"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.745183    3002 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812071    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/864ee813-0e93-434e-8930-250e69f33cfe-lib-modules\") pod \"kube-proxy-l29m5\" (UID: \"864ee813-0e93-434e-8930-250e69f33cfe\") " pod="kube-system/kube-proxy-l29m5"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812144    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4689f10-5d67-46e1-85cb-7aadda9b847b-lib-modules\") pod \"kindnet-nxh4c\" (UID: \"f4689f10-5d67-46e1-85cb-7aadda9b847b\") " pod="kube-system/kindnet-nxh4c"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812169    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f4689f10-5d67-46e1-85cb-7aadda9b847b-cni-cfg\") pod \"kindnet-nxh4c\" (UID: \"f4689f10-5d67-46e1-85cb-7aadda9b847b\") " pod="kube-system/kindnet-nxh4c"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812193    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4689f10-5d67-46e1-85cb-7aadda9b847b-xtables-lock\") pod \"kindnet-nxh4c\" (UID: \"f4689f10-5d67-46e1-85cb-7aadda9b847b\") " pod="kube-system/kindnet-nxh4c"
	Nov 28 04:51:27 pause-143970 kubelet[3002]: I1128 04:51:27.812262    3002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/864ee813-0e93-434e-8930-250e69f33cfe-xtables-lock\") pod \"kube-proxy-l29m5\" (UID: \"864ee813-0e93-434e-8930-250e69f33cfe\") " pod="kube-system/kube-proxy-l29m5"
	Nov 28 04:51:28 pause-143970 kubelet[3002]: I1128 04:51:28.024116    3002 scope.go:117] "RemoveContainer" containerID="b43da3b0d118015c2752da55bb14cee321bb35acf2de8a37da0256b87756d868"
	Nov 28 04:51:28 pause-143970 kubelet[3002]: I1128 04:51:28.025583    3002 scope.go:117] "RemoveContainer" containerID="16891782906b6f015fe70fe5b8c09870d8292f7ad1a602f315b525e8bce6209e"
	Nov 28 04:51:28 pause-143970 kubelet[3002]: I1128 04:51:28.033020    3002 scope.go:117] "RemoveContainer" containerID="8448df08bddbd0b24098d4dd35d852daefa9ec4389f764a94d32f4cef249b9e0"
	Nov 28 04:51:29 pause-143970 kubelet[3002]: I1128 04:51:29.937432    3002 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 28 04:51:32 pause-143970 kubelet[3002]: I1128 04:51:32.027357    3002 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-143970 -n pause-143970
helpers_test.go:261: (dbg) Run:  kubectl --context pause-143970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (64.93s)
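
Note on the post-mortem above: the container-status table shows every control-plane component running on its second attempt, with the attempt-1 instances of the same pods exited roughly 25-30 seconds earlier, so the cluster did recover after the restart. A minimal sketch for re-running the same inspection by hand, assuming the profile name and binary path taken from this log (the first two commands mirror the helper invocations above; the third is the standard way to regenerate the "==> ... <==" sections):

	# hedged repro sketch, not part of the harness output
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-143970 -n pause-143970
	kubectl --context pause-143970 get po -A --field-selector=status.phase!=Running
	out/minikube-linux-arm64 -p pause-143970 logs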

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (88.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3060408961.exe start -p stopped-upgrade-779758 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.3060408961.exe start -p stopped-upgrade-779758 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m9.515705714s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.3060408961.exe -p stopped-upgrade-779758 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.3060408961.exe -p stopped-upgrade-779758 stop: (11.951075118s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-779758 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-779758 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.928860564s)
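
The flow that fails here is: provision the profile with the archived v1.17.0 binary, stop it, then restart the same profile with the freshly built binary. A consolidated sketch of that sequence, assuming the harness-generated /tmp binary path from this run (it differs per run); the captured output of the failing third step follows below:

	# hedged repro sketch of the upgrade flow exercised by version_upgrade_test.go
	/tmp/minikube-v1.17.0.3060408961.exe start -p stopped-upgrade-779758 --memory=2200 --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.17.0.3060408961.exe -p stopped-upgrade-779758 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-779758 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio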

                                                
                                                
-- stdout --
	* [stopped-upgrade-779758] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-779758 in cluster stopped-upgrade-779758
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-779758" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 04:53:16.662781 1378389 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:53:16.662951 1378389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:53:16.662961 1378389 out.go:309] Setting ErrFile to fd 2...
	I1128 04:53:16.662967 1378389 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:53:16.663267 1378389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:53:16.663657 1378389 out.go:303] Setting JSON to false
	I1128 04:53:16.664808 1378389 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27332,"bootTime":1701119865,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:53:16.664887 1378389 start.go:138] virtualization:  
	I1128 04:53:16.672641 1378389 out.go:177] * [stopped-upgrade-779758] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:53:16.679632 1378389 notify.go:220] Checking for updates...
	I1128 04:53:16.683217 1378389 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:53:16.685041 1378389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:53:16.687072 1378389 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:53:16.690135 1378389 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:53:16.691848 1378389 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:53:16.693629 1378389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:53:16.696173 1378389 config.go:182] Loaded profile config "stopped-upgrade-779758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 04:53:16.698603 1378389 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 04:53:16.700258 1378389 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:53:16.725218 1378389 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:53:16.725330 1378389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:53:16.811185 1378389 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:53:16.801482565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:53:16.811298 1378389 docker.go:295] overlay module found
	I1128 04:53:16.813421 1378389 out.go:177] * Using the docker driver based on existing profile
	I1128 04:53:16.815103 1378389 start.go:298] selected driver: docker
	I1128 04:53:16.815122 1378389 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-779758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-779758 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.239 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 04:53:16.815222 1378389 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:53:16.815855 1378389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:53:16.882781 1378389 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:53:16.87355692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:53:16.883149 1378389 cni.go:84] Creating CNI manager for ""
	I1128 04:53:16.883169 1378389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:53:16.883180 1378389 start_flags.go:323] config:
	{Name:stopped-upgrade-779758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-779758 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.239 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 04:53:16.885552 1378389 out.go:177] * Starting control plane node stopped-upgrade-779758 in cluster stopped-upgrade-779758
	I1128 04:53:16.887545 1378389 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:53:16.889414 1378389 out.go:177] * Pulling base image ...
	I1128 04:53:16.891044 1378389 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1128 04:53:16.891131 1378389 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1128 04:53:16.909181 1378389 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1128 04:53:16.909209 1378389 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1128 04:53:16.957077 1378389 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1128 04:53:16.957256 1378389 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/stopped-upgrade-779758/config.json ...
	I1128 04:53:16.957395 1378389 cache.go:107] acquiring lock: {Name:mka9a2e991eba10434a66f00ab2058fa051639a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957485 1378389 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1128 04:53:16.957494 1378389 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.837µs
	I1128 04:53:16.957504 1378389 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1128 04:53:16.957518 1378389 cache.go:194] Successfully downloaded all kic artifacts
	I1128 04:53:16.957532 1378389 cache.go:107] acquiring lock: {Name:mk26c91ba81ebf0485d3d2b4159504e48036386d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957562 1378389 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1128 04:53:16.957560 1378389 start.go:365] acquiring machines lock for stopped-upgrade-779758: {Name:mk9dc5fd85170248adc0daaa0a3aba1de7fda020 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957567 1378389 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 37.555µs
	I1128 04:53:16.957574 1378389 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1128 04:53:16.957584 1378389 cache.go:107] acquiring lock: {Name:mkbc5e1711f09ecc0008e09508e418955e4198e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957598 1378389 start.go:369] acquired machines lock for "stopped-upgrade-779758" in 24.648µs
	I1128 04:53:16.957611 1378389 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:53:16.957616 1378389 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1128 04:53:16.957617 1378389 fix.go:54] fixHost starting: 
	I1128 04:53:16.957621 1378389 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 38.654µs
	I1128 04:53:16.957628 1378389 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1128 04:53:16.957637 1378389 cache.go:107] acquiring lock: {Name:mk94967ebcdd58bf9df6dfb0725979bab0037761 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957666 1378389 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1128 04:53:16.957670 1378389 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 34.954µs
	I1128 04:53:16.957677 1378389 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1128 04:53:16.957685 1378389 cache.go:107] acquiring lock: {Name:mk5c96306b5f465374d8e9f4ce93d0675ac42790 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957709 1378389 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1128 04:53:16.957715 1378389 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 30.277µs
	I1128 04:53:16.957721 1378389 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1128 04:53:16.957729 1378389 cache.go:107] acquiring lock: {Name:mk673aa97dd638eab8157fa37803fc00f97475e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957753 1378389 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1128 04:53:16.957758 1378389 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 30.015µs
	I1128 04:53:16.957764 1378389 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1128 04:53:16.957772 1378389 cache.go:107] acquiring lock: {Name:mka6833120e0559f634e3494d0d83941d371ad59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957795 1378389 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1128 04:53:16.957799 1378389 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 28.717µs
	I1128 04:53:16.957805 1378389 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1128 04:53:16.957813 1378389 cache.go:107] acquiring lock: {Name:mk9966ff9e63ed5631198d157e2b6d006484f44a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:53:16.957837 1378389 cache.go:115] /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1128 04:53:16.957841 1378389 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 29.095µs
	I1128 04:53:16.957847 1378389 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1128 04:53:16.957853 1378389 cache.go:87] Successfully saved all images to host disk.
	I1128 04:53:16.957878 1378389 cli_runner.go:164] Run: docker container inspect stopped-upgrade-779758 --format={{.State.Status}}
	I1128 04:53:16.975831 1378389 fix.go:102] recreateIfNeeded on stopped-upgrade-779758: state=Stopped err=<nil>
	W1128 04:53:16.975863 1378389 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:53:16.978413 1378389 out.go:177] * Restarting existing docker container for "stopped-upgrade-779758" ...
	I1128 04:53:16.980259 1378389 cli_runner.go:164] Run: docker start stopped-upgrade-779758
	I1128 04:53:17.300632 1378389 cli_runner.go:164] Run: docker container inspect stopped-upgrade-779758 --format={{.State.Status}}
	I1128 04:53:17.324557 1378389 kic.go:430] container "stopped-upgrade-779758" state is running.
	I1128 04:53:17.325039 1378389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-779758
	I1128 04:53:17.352776 1378389 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/stopped-upgrade-779758/config.json ...
	I1128 04:53:17.353018 1378389 machine.go:88] provisioning docker machine ...
	I1128 04:53:17.353041 1378389 ubuntu.go:169] provisioning hostname "stopped-upgrade-779758"
	I1128 04:53:17.353103 1378389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-779758
	I1128 04:53:17.378583 1378389 main.go:141] libmachine: Using SSH client type: native
	I1128 04:53:17.379030 1378389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34496 <nil> <nil>}
	I1128 04:53:17.379049 1378389 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-779758 && echo "stopped-upgrade-779758" | sudo tee /etc/hostname
	I1128 04:53:17.379852 1378389 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1128 04:53:20.533995 1378389 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-779758
	
	I1128 04:53:20.534074 1378389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-779758
	I1128 04:53:20.554954 1378389 main.go:141] libmachine: Using SSH client type: native
	I1128 04:53:20.555375 1378389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34496 <nil> <nil>}
	I1128 04:53:20.555399 1378389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-779758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-779758/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-779758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:53:20.698043 1378389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:53:20.698082 1378389 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17671-1256059/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-1256059/.minikube}
	I1128 04:53:20.698105 1378389 ubuntu.go:177] setting up certificates
	I1128 04:53:20.698113 1378389 provision.go:83] configureAuth start
	I1128 04:53:20.698174 1378389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-779758
	I1128 04:53:20.717557 1378389 provision.go:138] copyHostCerts
	I1128 04:53:20.717634 1378389 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem, removing ...
	I1128 04:53:20.717665 1378389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem
	I1128 04:53:20.717747 1378389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.pem (1082 bytes)
	I1128 04:53:20.717858 1378389 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem, removing ...
	I1128 04:53:20.717867 1378389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem
	I1128 04:53:20.717895 1378389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/cert.pem (1123 bytes)
	I1128 04:53:20.717958 1378389 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem, removing ...
	I1128 04:53:20.717966 1378389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem
	I1128 04:53:20.717989 1378389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-1256059/.minikube/key.pem (1679 bytes)
	I1128 04:53:20.718036 1378389 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-779758 san=[192.168.59.239 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-779758]
	I1128 04:53:21.510510 1378389 provision.go:172] copyRemoteCerts
	I1128 04:53:21.510588 1378389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:53:21.510640 1378389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-779758
	I1128 04:53:21.531332 1378389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34496 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/stopped-upgrade-779758/id_rsa Username:docker}
	I1128 04:53:21.634376 1378389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1128 04:53:21.659379 1378389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 04:53:21.683652 1378389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:53:21.707967 1378389 provision.go:86] duration metric: configureAuth took 1.009824316s
	I1128 04:53:21.707993 1378389 ubuntu.go:193] setting minikube options for container-runtime
	I1128 04:53:21.708177 1378389 config.go:182] Loaded profile config "stopped-upgrade-779758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1128 04:53:21.708352 1378389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-779758
	I1128 04:53:21.728285 1378389 main.go:141] libmachine: Using SSH client type: native
	I1128 04:53:21.728737 1378389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 34496 <nil> <nil>}
	I1128 04:53:21.728761 1378389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:53:22.161389 1378389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:53:22.161418 1378389 machine.go:91] provisioned docker machine in 4.808381997s
	I1128 04:53:22.161429 1378389 start.go:300] post-start starting for "stopped-upgrade-779758" (driver="docker")
	I1128 04:53:22.161440 1378389 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:53:22.161512 1378389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:53:22.161565 1378389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-779758
	I1128 04:53:22.180401 1378389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34496 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/stopped-upgrade-779758/id_rsa Username:docker}
	I1128 04:53:22.282219 1378389 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:53:22.286377 1378389 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 04:53:22.286407 1378389 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 04:53:22.286419 1378389 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 04:53:22.286427 1378389 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1128 04:53:22.286438 1378389 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/addons for local assets ...
	I1128 04:53:22.286494 1378389 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-1256059/.minikube/files for local assets ...
	I1128 04:53:22.286581 1378389 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem -> 12614152.pem in /etc/ssl/certs
	I1128 04:53:22.286688 1378389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:53:22.295810 1378389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/ssl/certs/12614152.pem --> /etc/ssl/certs/12614152.pem (1708 bytes)
	I1128 04:53:22.319208 1378389 start.go:303] post-start completed in 157.762286ms
	I1128 04:53:22.319298 1378389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:53:22.319345 1378389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-779758
	I1128 04:53:22.342244 1378389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34496 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/stopped-upgrade-779758/id_rsa Username:docker}
	I1128 04:53:22.438744 1378389 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 04:53:22.444537 1378389 fix.go:56] fixHost completed within 5.486904492s
	I1128 04:53:22.444565 1378389 start.go:83] releasing machines lock for "stopped-upgrade-779758", held for 5.486959146s
	I1128 04:53:22.444681 1378389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-779758
	I1128 04:53:22.463035 1378389 ssh_runner.go:195] Run: cat /version.json
	I1128 04:53:22.463078 1378389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:53:22.463087 1378389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-779758
	I1128 04:53:22.463146 1378389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-779758
	I1128 04:53:22.487823 1378389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34496 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/stopped-upgrade-779758/id_rsa Username:docker}
	I1128 04:53:22.495579 1378389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34496 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/stopped-upgrade-779758/id_rsa Username:docker}
	W1128 04:53:22.587676 1378389 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1128 04:53:22.587759 1378389 ssh_runner.go:195] Run: systemctl --version
	I1128 04:53:22.675114 1378389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:53:22.833107 1378389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 04:53:22.839226 1378389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:53:22.871330 1378389 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1128 04:53:22.871472 1378389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:53:22.919025 1378389 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:53:22.919050 1378389 start.go:472] detecting cgroup driver to use...
	I1128 04:53:22.919102 1378389 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 04:53:22.919187 1378389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:53:22.952272 1378389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:53:22.966267 1378389 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:53:22.966351 1378389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:53:22.980242 1378389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:53:22.994137 1378389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1128 04:53:23.009702 1378389 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1128 04:53:23.009801 1378389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:53:23.157161 1378389 docker.go:219] disabling docker service ...
	I1128 04:53:23.157278 1378389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:53:23.173027 1378389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:53:23.187038 1378389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:53:23.326734 1378389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:53:23.471320 1378389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:53:23.482987 1378389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:53:23.501663 1378389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1128 04:53:23.501761 1378389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:53:23.515025 1378389 out.go:177] 
	W1128 04:53:23.516800 1378389 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1128 04:53:23.516828 1378389 out.go:239] * 
	W1128 04:53:23.518159 1378389 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 04:53:23.520135 1378389 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-779758 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (88.40s)
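
The failing step above comes down to one missing file: the v1.17.0-era guest image ships no /etc/crio/crio.conf.d/02-crio.conf, so the in-place sed rewrite of pause_image exits with status 2. A minimal defensive sketch of that step in shell follows; the mkdir/touch guard and the append fallback are illustrative assumptions, not minikube's actual code, and a robust fix would also need to place the key under crio's [crio.image] TOML table:

	# Hypothetical hardening of the pause_image update: make sure the
	# drop-in exists before sed tries to rewrite it in place.
	sudo mkdir -p /etc/crio/crio.conf.d
	sudo touch /etc/crio/crio.conf.d/02-crio.conf
	if sudo grep -q 'pause_image = ' /etc/crio/crio.conf.d/02-crio.conf; then
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	else
		# Append fallback; assumes a bare key is acceptable in this drop-in,
		# which a strict TOML parser may reject outside [crio.image].
		echo 'pause_image = "registry.k8s.io/pause:3.2"' | sudo tee -a /etc/crio/crio.conf.d/02-crio.conf
	fi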

                                                
                                    

Test pass (269/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 11.29
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.16
10 TestDownloadOnly/v1.28.4/json-events 9.2
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.1
17 TestDownloadOnly/v1.29.0-rc.0/json-events 11.37
18 TestDownloadOnly/v1.29.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.0/LogsDuration 0.24
23 TestDownloadOnly/DeleteAll 0.25
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
26 TestBinaryMirror 0.65
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
32 TestAddons/Setup 169.58
34 TestAddons/parallel/Registry 15.8
36 TestAddons/parallel/InspektorGadget 10.89
37 TestAddons/parallel/MetricsServer 5.94
40 TestAddons/parallel/CSI 48
41 TestAddons/parallel/Headlamp 14.29
42 TestAddons/parallel/CloudSpanner 5.64
43 TestAddons/parallel/LocalPath 9.13
44 TestAddons/parallel/NvidiaDevicePlugin 5.62
47 TestAddons/serial/GCPAuth/Namespaces 0.19
48 TestAddons/StoppedEnableDisable 12.47
49 TestCertOptions 36.26
50 TestCertExpiration 251.72
52 TestForceSystemdFlag 35.99
53 TestForceSystemdEnv 41.39
59 TestErrorSpam/setup 29.84
60 TestErrorSpam/start 0.96
61 TestErrorSpam/status 1.2
62 TestErrorSpam/pause 1.94
63 TestErrorSpam/unpause 2
64 TestErrorSpam/stop 1.53
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 77.55
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 33.42
71 TestFunctional/serial/KubeContext 0.08
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.95
76 TestFunctional/serial/CacheCmd/cache/add_local 1.1
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
81 TestFunctional/serial/CacheCmd/cache/delete 0.17
82 TestFunctional/serial/MinikubeKubectlCmd 0.17
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
84 TestFunctional/serial/ExtraConfig 30.12
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.88
87 TestFunctional/serial/LogsFileCmd 2.04
88 TestFunctional/serial/InvalidService 5.07
90 TestFunctional/parallel/ConfigCmd 0.64
91 TestFunctional/parallel/DashboardCmd 11.5
92 TestFunctional/parallel/DryRun 0.61
93 TestFunctional/parallel/InternationalLanguage 0.34
94 TestFunctional/parallel/StatusCmd 1.19
98 TestFunctional/parallel/ServiceCmdConnect 9.73
99 TestFunctional/parallel/AddonsCmd 0.23
100 TestFunctional/parallel/PersistentVolumeClaim 24.68
102 TestFunctional/parallel/SSHCmd 0.95
103 TestFunctional/parallel/CpCmd 1.86
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 2.44
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.9
114 TestFunctional/parallel/License 0.42
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.79
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
128 TestFunctional/parallel/ProfileCmd/profile_list 0.45
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/MountCmd/any-port 7.99
131 TestFunctional/parallel/ServiceCmd/List 0.65
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.62
134 TestFunctional/parallel/ServiceCmd/Format 0.51
135 TestFunctional/parallel/ServiceCmd/URL 0.44
136 TestFunctional/parallel/MountCmd/specific-port 2.6
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2.41
138 TestFunctional/parallel/Version/short 0.1
139 TestFunctional/parallel/Version/components 1.33
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
144 TestFunctional/parallel/ImageCommands/ImageBuild 3
145 TestFunctional/parallel/ImageCommands/Setup 1.85
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.66
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.21
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.46
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.95
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.32
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
156 TestFunctional/delete_addon-resizer_images 0.09
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
162 TestIngressAddonLegacy/StartLegacyK8sCluster 97.54
164 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.57
165 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.72
169 TestJSONOutput/start/Command 81.75
170 TestJSONOutput/start/Audit 0
172 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/pause/Command 0.85
176 TestJSONOutput/pause/Audit 0
178 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/unpause/Command 0.77
182 TestJSONOutput/unpause/Audit 0
184 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/stop/Command 5.95
188 TestJSONOutput/stop/Audit 0
190 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
192 TestErrorJSONOutput 0.27
194 TestKicCustomNetwork/create_custom_network 45.1
195 TestKicCustomNetwork/use_default_bridge_network 35.47
196 TestKicExistingNetwork 39.42
197 TestKicCustomSubnet 34
198 TestKicStaticIP 35.18
199 TestMainNoArgs 0.07
200 TestMinikubeProfile 75.44
203 TestMountStart/serial/StartWithMountFirst 7.44
204 TestMountStart/serial/VerifyMountFirst 0.31
205 TestMountStart/serial/StartWithMountSecond 6.76
206 TestMountStart/serial/VerifyMountSecond 0.31
207 TestMountStart/serial/DeleteFirst 1.68
208 TestMountStart/serial/VerifyMountPostDelete 0.3
209 TestMountStart/serial/Stop 1.25
210 TestMountStart/serial/RestartStopped 8.16
211 TestMountStart/serial/VerifyMountPostStop 0.3
214 TestMultiNode/serial/FreshStart2Nodes 125.3
215 TestMultiNode/serial/DeployApp2Nodes 5.64
217 TestMultiNode/serial/AddNode 51.49
218 TestMultiNode/serial/ProfileList 0.35
219 TestMultiNode/serial/CopyFile 11.49
220 TestMultiNode/serial/StopNode 2.41
221 TestMultiNode/serial/StartAfterStop 12.64
222 TestMultiNode/serial/RestartKeepsNodes 120.59
223 TestMultiNode/serial/DeleteNode 5.31
224 TestMultiNode/serial/StopMultiNode 24.16
225 TestMultiNode/serial/RestartMultiNode 80.85
226 TestMultiNode/serial/ValidateNameConflict 36.47
231 TestPreload 163.33
233 TestScheduledStopUnix 107.02
236 TestInsufficientStorage 11.36
239 TestKubernetesUpgrade 389.95
242 TestPause/serial/Start 94.47
244 TestStoppedBinaryUpgrade/Setup 0.96
246 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
256 TestNoKubernetes/serial/StartWithK8s 32.4
257 TestNoKubernetes/serial/StartWithStopK8s 17.76
258 TestNoKubernetes/serial/Start 8.87
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
260 TestNoKubernetes/serial/ProfileList 0.93
261 TestNoKubernetes/serial/Stop 1.25
262 TestNoKubernetes/serial/StartNoArgs 7.23
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
271 TestNetworkPlugins/group/false 4.56
276 TestStartStop/group/old-k8s-version/serial/FirstStart 127.73
278 TestStartStop/group/no-preload/serial/FirstStart 69.62
279 TestStartStop/group/old-k8s-version/serial/DeployApp 10.85
280 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.96
281 TestStartStop/group/old-k8s-version/serial/Stop 12.25
282 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
283 TestStartStop/group/old-k8s-version/serial/SecondStart 439.44
284 TestStartStop/group/no-preload/serial/DeployApp 9.1
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.42
286 TestStartStop/group/no-preload/serial/Stop 12.65
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
288 TestStartStop/group/no-preload/serial/SecondStart 354.26
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.37
292 TestStartStop/group/no-preload/serial/Pause 3.5
294 TestStartStop/group/embed-certs/serial/FirstStart 83.16
295 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.06
296 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.17
297 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.58
298 TestStartStop/group/old-k8s-version/serial/Pause 4.61
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.47
301 TestStartStop/group/embed-certs/serial/DeployApp 9.55
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
303 TestStartStop/group/embed-certs/serial/Stop 12.17
304 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
305 TestStartStop/group/embed-certs/serial/SecondStart 623.15
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.56
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.72
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.27
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 350.39
311 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.06
312 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
313 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.41
314 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.73
316 TestStartStop/group/newest-cni/serial/FirstStart 42.56
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.26
319 TestStartStop/group/newest-cni/serial/Stop 1.33
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
321 TestStartStop/group/newest-cni/serial/SecondStart 31.02
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
325 TestStartStop/group/newest-cni/serial/Pause 3.47
326 TestNetworkPlugins/group/auto/Start 75.81
327 TestNetworkPlugins/group/auto/KubeletFlags 0.35
328 TestNetworkPlugins/group/auto/NetCatPod 9.41
329 TestNetworkPlugins/group/auto/DNS 0.22
330 TestNetworkPlugins/group/auto/Localhost 0.21
331 TestNetworkPlugins/group/auto/HairPin 0.2
332 TestNetworkPlugins/group/kindnet/Start 82.41
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.18
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.6
336 TestStartStop/group/embed-certs/serial/Pause 5.19
337 TestNetworkPlugins/group/calico/Start 72.22
338 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
340 TestNetworkPlugins/group/kindnet/NetCatPod 12.55
341 TestNetworkPlugins/group/kindnet/DNS 0.37
342 TestNetworkPlugins/group/kindnet/Localhost 0.26
343 TestNetworkPlugins/group/kindnet/HairPin 0.28
344 TestNetworkPlugins/group/calico/ControllerPod 5.05
345 TestNetworkPlugins/group/calico/KubeletFlags 0.44
346 TestNetworkPlugins/group/calico/NetCatPod 13.58
347 TestNetworkPlugins/group/calico/DNS 0.25
348 TestNetworkPlugins/group/calico/Localhost 0.27
349 TestNetworkPlugins/group/calico/HairPin 0.25
350 TestNetworkPlugins/group/custom-flannel/Start 76.93
351 TestNetworkPlugins/group/enable-default-cni/Start 82.62
352 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.55
353 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.6
354 TestNetworkPlugins/group/custom-flannel/DNS 0.25
355 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
356 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.42
359 TestNetworkPlugins/group/flannel/Start 74.13
360 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
361 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
362 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
363 TestNetworkPlugins/group/bridge/Start 85.52
364 TestNetworkPlugins/group/flannel/ControllerPod 5.04
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
366 TestNetworkPlugins/group/flannel/NetCatPod 11.38
367 TestNetworkPlugins/group/flannel/DNS 0.25
368 TestNetworkPlugins/group/flannel/Localhost 0.21
369 TestNetworkPlugins/group/flannel/HairPin 0.22
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
371 TestNetworkPlugins/group/bridge/NetCatPod 10.34
372 TestNetworkPlugins/group/bridge/DNS 0.22
373 TestNetworkPlugins/group/bridge/Localhost 0.2
374 TestNetworkPlugins/group/bridge/HairPin 0.18
x
+
TestDownloadOnly/v1.16.0/json-events (11.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-354322 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-354322 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.29139302s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.29s)
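
The json-events variant exercises minikube's machine-readable output: with -o=json, start emits one JSON event per line on stdout instead of the human-facing text. A hypothetical way to watch just the event types during a download-only run, assuming each line is a CloudEvents-style object with a top-level "type" field and that jq is available:

	# Sketch only: stream event types from a download-only start.
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-354322 \
		--force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
		| jq -r .type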

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
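
preload-exists only asserts that the tarball fetched by the previous test is present in the local cache. A manual spot-check sketch, using the cache path the download log below reports for the v1.16.0 cri-o arm64 preload:

	# Hypothetical by-hand equivalent of the preload-exists assertion.
	ls -lh /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4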

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-354322
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-354322: exit status 85 (164.120878ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-354322 | jenkins | v1.32.0 | 28 Nov 23 04:12 UTC |          |
	|         | -p download-only-354322        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:12:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:12:48.772709 1261420 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:12:48.772924 1261420 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:12:48.772951 1261420 out.go:309] Setting ErrFile to fd 2...
	I1128 04:12:48.772970 1261420 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:12:48.773275 1261420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	W1128 04:12:48.773471 1261420 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17671-1256059/.minikube/config/config.json: open /home/jenkins/minikube-integration/17671-1256059/.minikube/config/config.json: no such file or directory
	I1128 04:12:48.773988 1261420 out.go:303] Setting JSON to true
	I1128 04:12:48.775074 1261420 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24904,"bootTime":1701119865,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:12:48.775191 1261420 start.go:138] virtualization:  
	I1128 04:12:48.778102 1261420 out.go:97] [download-only-354322] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:12:48.780204 1261420 out.go:169] MINIKUBE_LOCATION=17671
	W1128 04:12:48.778367 1261420 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball: no such file or directory
	I1128 04:12:48.778432 1261420 notify.go:220] Checking for updates...
	I1128 04:12:48.783974 1261420 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:12:48.785681 1261420 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:12:48.787260 1261420 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:12:48.788887 1261420 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1128 04:12:48.792558 1261420 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1128 04:12:48.792892 1261420 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:12:48.817495 1261420 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:12:48.817605 1261420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:12:48.897386 1261420 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-28 04:12:48.886802498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:12:48.897496 1261420 docker.go:295] overlay module found
	I1128 04:12:48.899540 1261420 out.go:97] Using the docker driver based on user configuration
	I1128 04:12:48.899564 1261420 start.go:298] selected driver: docker
	I1128 04:12:48.899576 1261420 start.go:902] validating driver "docker" against <nil>
	I1128 04:12:48.899687 1261420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:12:48.970410 1261420 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-28 04:12:48.960477276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:12:48.970579 1261420 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 04:12:48.970855 1261420 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1128 04:12:48.971012 1261420 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1128 04:12:48.972764 1261420 out.go:169] Using Docker driver with root privileges
	I1128 04:12:48.974580 1261420 cni.go:84] Creating CNI manager for ""
	I1128 04:12:48.974608 1261420 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:12:48.974621 1261420 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1128 04:12:48.974637 1261420 start_flags.go:323] config:
	{Name:download-only-354322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-354322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:12:48.976800 1261420 out.go:97] Starting control plane node download-only-354322 in cluster download-only-354322
	I1128 04:12:48.976837 1261420 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:12:48.978449 1261420 out.go:97] Pulling base image ...
	I1128 04:12:48.978475 1261420 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 04:12:48.978520 1261420 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:12:48.995788 1261420 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1128 04:12:48.996429 1261420 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1128 04:12:48.996529 1261420 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1128 04:12:49.047146 1261420 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1128 04:12:49.047178 1261420 cache.go:56] Caching tarball of preloaded images
	I1128 04:12:49.047800 1261420 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 04:12:49.050126 1261420 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1128 04:12:49.050156 1261420 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:12:49.172338 1261420 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1128 04:12:53.898977 1261420 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1128 04:12:55.862655 1261420 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:12:55.862783 1261420 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:12:56.873181 1261420 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1128 04:12:56.873627 1261420 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/download-only-354322/config.json ...
	I1128 04:12:56.873662 1261420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/download-only-354322/config.json: {Name:mka28d807ad31fcde37f96473ad4528c13af4da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:56.874436 1261420 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 04:12:56.875162 1261420 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-354322"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.16s)
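
Exit status 85 is the expected result here: the profile was created with --download-only and never started, so `minikube logs` has no control plane to read from, and the test passes on the non-zero exit. A minimal reproduction sketch, assuming the same workspace binary and profile:

	# Expected to exit 85: the download-only profile has no running node.
	out/minikube-linux-arm64 logs -p download-only-354322
	echo "exit: $?"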

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (9.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-354322 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-354322 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.19934905s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (9.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-354322
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-354322: exit status 85 (95.369601ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-354322 | jenkins | v1.32.0 | 28 Nov 23 04:12 UTC |          |
	|         | -p download-only-354322        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-354322 | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC |          |
	|         | -p download-only-354322        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:13:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:13:00.301665 1261499 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:13:00.302217 1261499 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:13:00.302264 1261499 out.go:309] Setting ErrFile to fd 2...
	I1128 04:13:00.302289 1261499 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:13:00.302662 1261499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	W1128 04:13:00.302923 1261499 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17671-1256059/.minikube/config/config.json: open /home/jenkins/minikube-integration/17671-1256059/.minikube/config/config.json: no such file or directory
	I1128 04:13:00.303405 1261499 out.go:303] Setting JSON to true
	I1128 04:13:00.304651 1261499 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24915,"bootTime":1701119865,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:13:00.304820 1261499 start.go:138] virtualization:  
	I1128 04:13:00.307314 1261499 out.go:97] [download-only-354322] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:13:00.307818 1261499 notify.go:220] Checking for updates...
	I1128 04:13:00.312894 1261499 out.go:169] MINIKUBE_LOCATION=17671
	I1128 04:13:00.315891 1261499 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:13:00.317961 1261499 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:13:00.322826 1261499 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:13:00.327659 1261499 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1128 04:13:00.345638 1261499 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1128 04:13:00.346387 1261499 config.go:182] Loaded profile config "download-only-354322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1128 04:13:00.346499 1261499 start.go:810] api.Load failed for download-only-354322: filestore "download-only-354322": Docker machine "download-only-354322" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1128 04:13:00.346683 1261499 driver.go:378] Setting default libvirt URI to qemu:///system
	W1128 04:13:00.346749 1261499 start.go:810] api.Load failed for download-only-354322: filestore "download-only-354322": Docker machine "download-only-354322" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1128 04:13:00.393239 1261499 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:13:00.393369 1261499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:13:00.472981 1261499 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-28 04:13:00.462477919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:13:00.473092 1261499 docker.go:295] overlay module found
	I1128 04:13:00.474883 1261499 out.go:97] Using the docker driver based on existing profile
	I1128 04:13:00.474919 1261499 start.go:298] selected driver: docker
	I1128 04:13:00.474926 1261499 start.go:902] validating driver "docker" against &{Name:download-only-354322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-354322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:13:00.475107 1261499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:13:00.543277 1261499 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-28 04:13:00.533872398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:13:00.543792 1261499 cni.go:84] Creating CNI manager for ""
	I1128 04:13:00.543810 1261499 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:13:00.543823 1261499 start_flags.go:323] config:
	{Name:download-only-354322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-354322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:13:00.545787 1261499 out.go:97] Starting control plane node download-only-354322 in cluster download-only-354322
	I1128 04:13:00.545813 1261499 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:13:00.547448 1261499 out.go:97] Pulling base image ...
	I1128 04:13:00.547474 1261499 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:13:00.547581 1261499 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:13:00.565528 1261499 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1128 04:13:00.565687 1261499 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1128 04:13:00.565709 1261499 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1128 04:13:00.565714 1261499 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1128 04:13:00.565722 1261499 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1128 04:13:00.660068 1261499 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	I1128 04:13:00.660111 1261499 cache.go:56] Caching tarball of preloaded images
	I1128 04:13:00.660279 1261499 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:13:00.662438 1261499 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1128 04:13:00.662467 1261499 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:13:00.777259 1261499 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:23e2271fd1a7b32f52ce36ae8363c081 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-354322"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.10s)
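
Note: exit status 85 here is the expected outcome rather than a regression: a --download-only run only fills the cache and never creates a node, so "minikube logs" has no control plane to read from (see 'The control plane node "" does not exist.' above), and the test passes precisely because the command fails. A minimal sketch of the same check by hand, assuming the download-only profile from this run still exists:

	out/minikube-linux-arm64 logs -p download-only-354322
	echo "logs exit code: $?"    # expected to print 85 while the profile has no node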

TestDownloadOnly/v1.29.0-rc.0/json-events (11.37s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-354322 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-354322 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.371582671s)
--- PASS: TestDownloadOnly/v1.29.0-rc.0/json-events (11.37s)

TestDownloadOnly/v1.29.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.0/preload-exists (0.00s)
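
Note: preload-exists only asserts that the tarball fetched by the json-events step above is present in the local cache. A hand-rolled equivalent, using the cache path that download.go:107 logs in the LogsDuration output below:

	ls -lh /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4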

TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.24s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-354322
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-354322: exit status 85 (243.128218ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-354322 | jenkins | v1.32.0 | 28 Nov 23 04:12 UTC |          |
	|         | -p download-only-354322           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-354322 | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC |          |
	|         | -p download-only-354322           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-354322 | jenkins | v1.32.0 | 28 Nov 23 04:13 UTC |          |
	|         | -p download-only-354322           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:13:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:13:09.530836 1261574 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:13:09.530995 1261574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:13:09.531005 1261574 out.go:309] Setting ErrFile to fd 2...
	I1128 04:13:09.531011 1261574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:13:09.531278 1261574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	W1128 04:13:09.531451 1261574 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17671-1256059/.minikube/config/config.json: open /home/jenkins/minikube-integration/17671-1256059/.minikube/config/config.json: no such file or directory
	I1128 04:13:09.531709 1261574 out.go:303] Setting JSON to true
	I1128 04:13:09.532778 1261574 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":24924,"bootTime":1701119865,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:13:09.532856 1261574 start.go:138] virtualization:  
	I1128 04:13:09.535609 1261574 out.go:97] [download-only-354322] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:13:09.535931 1261574 notify.go:220] Checking for updates...
	I1128 04:13:09.538801 1261574 out.go:169] MINIKUBE_LOCATION=17671
	I1128 04:13:09.541236 1261574 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:13:09.543478 1261574 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:13:09.545430 1261574 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:13:09.547385 1261574 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1128 04:13:09.551407 1261574 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1128 04:13:09.551971 1261574 config.go:182] Loaded profile config "download-only-354322": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1128 04:13:09.552026 1261574 start.go:810] api.Load failed for download-only-354322: filestore "download-only-354322": Docker machine "download-only-354322" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1128 04:13:09.552133 1261574 driver.go:378] Setting default libvirt URI to qemu:///system
	W1128 04:13:09.552161 1261574 start.go:810] api.Load failed for download-only-354322: filestore "download-only-354322": Docker machine "download-only-354322" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1128 04:13:09.576179 1261574 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:13:09.576284 1261574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:13:09.653707 1261574 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-28 04:13:09.643880683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:13:09.653819 1261574 docker.go:295] overlay module found
	I1128 04:13:09.655843 1261574 out.go:97] Using the docker driver based on existing profile
	I1128 04:13:09.655864 1261574 start.go:298] selected driver: docker
	I1128 04:13:09.655870 1261574 start.go:902] validating driver "docker" against &{Name:download-only-354322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-354322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:13:09.656035 1261574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:13:09.723757 1261574 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-28 04:13:09.714344172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:13:09.724224 1261574 cni.go:84] Creating CNI manager for ""
	I1128 04:13:09.724245 1261574 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1128 04:13:09.724258 1261574 start_flags.go:323] config:
	{Name:download-only-354322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:download-only-354322 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:13:09.726353 1261574 out.go:97] Starting control plane node download-only-354322 in cluster download-only-354322
	I1128 04:13:09.726395 1261574 cache.go:121] Beginning downloading kic base image for docker with crio
	I1128 04:13:09.728360 1261574 out.go:97] Pulling base image ...
	I1128 04:13:09.728385 1261574 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 04:13:09.728576 1261574 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 04:13:09.745330 1261574 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1128 04:13:09.745488 1261574 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1128 04:13:09.745512 1261574 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1128 04:13:09.745520 1261574 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1128 04:13:09.745528 1261574 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1128 04:13:09.793127 1261574 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4
	I1128 04:13:09.793167 1261574 cache.go:56] Caching tarball of preloaded images
	I1128 04:13:09.793717 1261574 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 04:13:09.795955 1261574 out.go:97] Downloading Kubernetes v1.29.0-rc.0 preload ...
	I1128 04:13:09.795982 1261574 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:13:09.910914 1261574 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:2fbedfd2c2a9c642428164f4d73fb9c1 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4
	I1128 04:13:16.558672 1261574 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:13:16.558775 1261574 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4 ...
	I1128 04:13:17.433374 1261574 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.0 on crio
	I1128 04:13:17.433511 1261574 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/download-only-354322/config.json ...
	I1128 04:13:17.433753 1261574 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 04:13:17.433952 1261574 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17671-1256059/.minikube/cache/linux/arm64/v1.29.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-354322"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.24s)
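
Note: both download URLs in the output above carry their own integrity data: the preload URL embeds an md5 in its checksum query parameter, and the kubectl URL references a detached .sha256 file. A sketch of re-verifying the cached artifacts by hand, assuming the MINIKUBE_HOME used in this run:

	cd /home/jenkins/minikube-integration/17671-1256059/.minikube/cache
	# md5 value copied from the preload download URL above
	echo "2fbedfd2c2a9c642428164f4d73fb9c1  preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-arm64.tar.lz4" | md5sum -c -
	# compare the cached binary against the published sha256
	sha256sum linux/arm64/v1.29.0-rc.0/kubectl
	curl -sL https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/arm64/kubectl.sha256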

TestDownloadOnly/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-354322
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-303745 --alsologtostderr --binary-mirror http://127.0.0.1:44979 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-303745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-303745
--- PASS: TestBinaryMirror (0.65s)
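
Note: --binary-mirror points minikube at an alternative host for the kubectl, kubeadm, and kubelet downloads; this test serves one on 127.0.0.1:44979. A rough sketch of standing up such a mirror by hand; the dl.k8s.io-style path layout and the demo profile name are assumptions here, not details taken from this run:

	# lay binaries out the way dl.k8s.io does, then serve the tree
	mkdir -p mirror/v1.28.4/bin/linux/arm64
	cp kubectl kubeadm kubelet mirror/v1.28.4/bin/linux/arm64/    # hypothetical pre-fetched binaries
	python3 -m http.server 44979 --directory mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:44979 --driver=docker --container-runtime=crio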

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-663058
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-663058: exit status 85 (85.540044ms)

-- stdout --
	* Profile "addons-663058" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663058"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-663058
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-663058: exit status 85 (97.090299ms)

-- stdout --
	* Profile "addons-663058" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663058"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (169.58s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-663058 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-663058 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m49.576547941s)
--- PASS: TestAddons/Setup (169.58s)
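
Note: this setup enables every addon at start time through repeated --addons flags. The same addons can also be toggled on the running profile with commands of the shape used throughout this report:

	out/minikube-linux-arm64 addons list -p addons-663058
	out/minikube-linux-arm64 addons enable ingress -p addons-663058
	out/minikube-linux-arm64 addons disable ingress -p addons-663058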

TestAddons/parallel/Registry (15.8s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 53.615195ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fxxqk" [d49b47c4-18b1-4fec-8b15-184bb5ff000a] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013607955s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-79xtf" [d52cab37-0553-4eb5-b573-7796b189da95] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014245628s
addons_test.go:339: (dbg) Run:  kubectl --context addons-663058 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-663058 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-663058 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.408158247s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 ip
2023/11/28 04:16:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.80s)
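
Note: the registry check probes the addon from two sides: in-cluster through the kube-system service DNS name (the wget above), and from the host through the node IP on port 5000 (the DEBUG GET line). The same probes by hand; the pod name below is arbitrary, and /v2/_catalog is the standard Docker Registry HTTP API endpoint rather than anything minikube-specific:

	kubectl --context addons-663058 run --rm registry-probe --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- wget --spider -S http://registry.kube-system.svc.cluster.local
	curl "http://$(out/minikube-linux-arm64 -p addons-663058 ip):5000/v2/_catalog"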

TestAddons/parallel/InspektorGadget (10.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ccl9x" [a3372c59-f7a3-438f-8fd3-4a52b72e1683] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015366035s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-663058
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-663058: (5.875149435s)
--- PASS: TestAddons/parallel/InspektorGadget (10.89s)

TestAddons/parallel/MetricsServer (5.94s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 13.34104ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-xqlzx" [86203f39-30e9-4edc-8374-9cc756336a40] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.021950132s
addons_test.go:414: (dbg) Run:  kubectl --context addons-663058 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.94s)
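
Note: "kubectl top" only works once metrics-server's aggregated API is registered and serving, which is what the pod-health wait above effectively guarantees. A quick manual check, assuming the registration name metrics-server conventionally uses:

	kubectl --context addons-663058 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-663058 top nodes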

TestAddons/parallel/CSI (48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 53.406007ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-663058 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-663058 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fed93e03-d25c-458a-b39c-5f9968b56827] Pending
helpers_test.go:344: "task-pv-pod" [fed93e03-d25c-458a-b39c-5f9968b56827] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fed93e03-d25c-458a-b39c-5f9968b56827] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.024438351s
addons_test.go:583: (dbg) Run:  kubectl --context addons-663058 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-663058 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-663058 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-663058 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-663058 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-663058 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-663058 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ed01b38f-2713-48de-893e-440767a409f1] Pending
helpers_test.go:344: "task-pv-pod-restore" [ed01b38f-2713-48de-893e-440767a409f1] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.015643234s
addons_test.go:625: (dbg) Run:  kubectl --context addons-663058 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-663058 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-663058 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-663058 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.48592864s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:641: (dbg) Done: out/minikube-linux-arm64 -p addons-663058 addons disable volumesnapshots --alsologtostderr -v=1: (1.157671259s)
--- PASS: TestAddons/parallel/CSI (48.00s)
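
Note: the flow above is PVC -> pod -> VolumeSnapshot -> restored PVC -> pod, driven by manifests under testdata/csi-hostpath-driver/ that this report does not reproduce. A minimal sketch of the snapshot and restore objects; the class names csi-hostpath-snapclass and csi-hostpath-sc are assumed defaults of the csi-hostpath-driver addon, not values confirmed by this log:

	kubectl --context addons-663058 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass    # assumed addon default
	  source:
	    persistentVolumeClaimName: hpvc
	EOF
	kubectl --context addons-663058 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc    # assumed addon default
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	EOF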

TestAddons/parallel/Headlamp (14.29s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-663058 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-663058 --alsologtostderr -v=1: (1.26147279s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-mt76q" [2769b26a-bd51-4b67-afe7-27bd2519c91b] Pending
helpers_test.go:344: "headlamp-777fd4b855-mt76q" [2769b26a-bd51-4b67-afe7-27bd2519c91b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-mt76q" [2769b26a-bd51-4b67-afe7-27bd2519c91b] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.024529357s
--- PASS: TestAddons/parallel/Headlamp (14.29s)

TestAddons/parallel/CloudSpanner (5.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-gvxg8" [4b62a475-90d9-47b1-9067-63806f661352] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012141908s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-663058
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

TestAddons/parallel/LocalPath (9.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-663058 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-663058 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663058 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [92e18f37-3eb5-4890-9c50-415c7ce39f8f] Pending
helpers_test.go:344: "test-local-path" [92e18f37-3eb5-4890-9c50-415c7ce39f8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [92e18f37-3eb5-4890-9c50-415c7ce39f8f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [92e18f37-3eb5-4890-9c50-415c7ce39f8f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.0105547s
addons_test.go:890: (dbg) Run:  kubectl --context addons-663058 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 ssh "cat /opt/local-path-provisioner/pvc-5d8f3d78-96c7-45ba-a454-abb0965b117c_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-663058 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-663058 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-663058 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.13s)
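For reference, the sequence above condenses to a handful of commands that can be replayed by hand. This is a sketch, not part of the test output; note that the host directory name embeds the generated PV name (the pvc-… segment in the log), which differs on every run:

	kubectl --context addons-663058 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-663058 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# poll the claim until the local-path provisioner binds it, as helpers_test.go:394 does
	kubectl --context addons-663058 get pvc test-pvc -n default -o jsonpath='{.status.phase}'
	# after the pod completes, read its file back through the provisioner's host path
	minikube -p addons-663058 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"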

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5rx4b" [c5901cae-07eb-478c-8959-5d32467d77ac] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.020518568s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-663058
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.62s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-663058 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-663058 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.47s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-663058
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-663058: (12.139778924s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-663058
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-663058
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-663058
--- PASS: TestAddons/StoppedEnableDisable (12.47s)

                                                
                                    
TestCertOptions (36.26s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-175575 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1128 04:58:53.034701 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-175575 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.33874752s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-175575 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-175575 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-175575 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-175575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-175575
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-175575: (2.174118557s)
--- PASS: TestCertOptions (36.26s)
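What this test asserts can be cross-checked by hand: the extra SANs and the custom port should appear in the generated apiserver certificate and kubeconfig. A sketch using the same profile and cert path as the run above; the expected values follow from the start flags, not from output shown here:

	# list the Subject Alternative Names baked into the apiserver cert
	minikube -p cert-options-175575 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	# expect DNS:localhost, DNS:www.google.com and IP Address:192.168.15.15 among the entries
	# the kubeconfig server URL should carry the custom apiserver port 8555
	# (assumes this profile is the only cluster in the kubeconfig)
	kubectl --context cert-options-175575 config view -o jsonpath='{.clusters[0].cluster.server}'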

                                                
                                    
TestCertExpiration (251.72s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-908952 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1128 04:56:51.638076 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-908952 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.20353467s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-908952 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-908952 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.014460568s)
helpers_test.go:175: Cleaning up "cert-expiration-908952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-908952
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-908952: (2.499153782s)
--- PASS: TestCertExpiration (251.72s)
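The flow here: start with 3-minute certificates, let them age past expiry, then restart with --cert-expiration=8760h so minikube regenerates them. A sketch for inspecting the resulting validity window by hand (cert path assumed to match the other cert tests):

	# print the notAfter timestamp of the current apiserver cert
	minikube -p cert-expiration-908952 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# exit 1 if the cert would expire within the next hour
	minikube -p cert-expiration-908952 ssh "openssl x509 -checkend 3600 -noout -in /var/lib/minikube/certs/apiserver.crt"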

                                                
                                    
TestForceSystemdFlag (35.99s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-716820 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-716820 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.154697702s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-716820 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-716820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-716820
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-716820: (2.477554086s)
--- PASS: TestForceSystemdFlag (35.99s)
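The assertion rides on the CRI-O drop-in that minikube writes when --force-systemd is set; the log runs cat over it but does not echo the contents. Presumably the relevant knob is the cgroup manager, so a manual spot-check might look like this (the grep target is an assumption, not confirmed by the log):

	minikube -p force-systemd-flag-716820 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected if --force-systemd took effect: cgroup_manager = "systemd"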

                                                
                                    
TestForceSystemdEnv (41.39s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-492582 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-492582 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.813525036s)
helpers_test.go:175: Cleaning up "force-systemd-env-492582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-492582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-492582: (2.577579417s)
--- PASS: TestForceSystemdEnv (41.39s)

                                                
                                    
TestErrorSpam/setup (29.84s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-074468 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-074468 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-074468 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-074468 --driver=docker  --container-runtime=crio: (29.840528904s)
--- PASS: TestErrorSpam/setup (29.84s)

                                                
                                    
TestErrorSpam/start (0.96s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 start --dry-run
--- PASS: TestErrorSpam/start (0.96s)

                                                
                                    
TestErrorSpam/status (1.2s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 status
--- PASS: TestErrorSpam/status (1.20s)

                                                
                                    
TestErrorSpam/pause (1.94s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 pause
--- PASS: TestErrorSpam/pause (1.94s)

                                                
                                    
TestErrorSpam/unpause (2s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 unpause
--- PASS: TestErrorSpam/unpause (2.00s)

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 stop: (1.284708328s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-074468 --log_dir /tmp/nospam-074468 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17671-1256059/.minikube/files/etc/test/nested/copy/1261415/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (77.55s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-789811 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1128 04:21:13.956814 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 04:21:15.237435 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 04:21:17.798878 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 04:21:22.919062 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 04:21:33.159345 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 04:21:53.639556 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-789811 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.553783555s)
--- PASS: TestFunctional/serial/StartWithProxy (77.55s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (33.42s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-789811 --alsologtostderr -v=8
E1128 04:22:34.601175 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-789811 --alsologtostderr -v=8: (33.41904634s)
functional_test.go:659: soft start took 33.420235717s for "functional-789811" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.42s)

                                                
                                    
TestFunctional/serial/KubeContext (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-789811 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 cache add registry.k8s.io/pause:3.1: (1.32196824s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 cache add registry.k8s.io/pause:3.3: (1.333299349s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 cache add registry.k8s.io/pause:latest: (1.296380551s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-789811 /tmp/TestFunctionalserialCacheCmdcacheadd_local415126849/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 cache add minikube-local-cache-test:functional-789811
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 cache delete minikube-local-cache-test:functional-789811
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-789811
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (331.939739ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 cache reload: (1.13106334s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
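Condensed, the evict-and-restore cycle above is:

	# remove the image from the node's container storage
	minikube -p functional-789811 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-789811 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	# push every image in minikube's local cache back onto the node
	minikube -p functional-789811 cache reload
	minikube -p functional-789811 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again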

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 kubectl -- --context functional-789811 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-789811 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (30.12s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-789811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-789811 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.121593489s)
functional_test.go:757: restart took 30.121689932s for "functional-789811" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (30.12s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-789811 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 logs: (1.878943742s)
--- PASS: TestFunctional/serial/LogsCmd (1.88s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.04s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 logs --file /tmp/TestFunctionalserialLogsFileCmd3075345435/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 logs --file /tmp/TestFunctionalserialLogsFileCmd3075345435/001/logs.txt: (2.039636956s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.04s)

                                                
                                    
TestFunctional/serial/InvalidService (5.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-789811 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-789811
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-789811: exit status 115 (758.312266ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32742 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-789811 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.07s)
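The behaviour under test: minikube service refuses to hand out a URL for a service whose endpoints have no running pod. Condensed from the run above, exit code included:

	kubectl --context functional-789811 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-789811
	echo $?   # 115, with "X Exiting due to SVC_UNREACHABLE" on stderr
	kubectl --context functional-789811 delete -f testdata/invalidsvc.yaml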

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 config get cpus: exit status 14 (130.471749ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 config get cpus: exit status 14 (81.58896ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.64s)
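Condensed, the set/get/unset round-trip and its error contract (exit code 14 for a key that is not in the config, as both non-zero exits above show):

	minikube -p functional-789811 config get cpus     # exit 14: key not set
	minikube -p functional-789811 config set cpus 2
	minikube -p functional-789811 config get cpus     # prints 2
	minikube -p functional-789811 config unset cpus
	minikube -p functional-789811 config get cpus     # exit 14 again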

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-789811 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-789811 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1286791: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.50s)

                                                
                                    
TestFunctional/parallel/DryRun (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-789811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-789811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (286.621629ms)

                                                
                                                
-- stdout --
	* [functional-789811] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 04:24:23.910974 1286419 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:24:23.911303 1286419 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:24:23.911325 1286419 out.go:309] Setting ErrFile to fd 2...
	I1128 04:24:23.911332 1286419 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:24:23.911588 1286419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:24:23.911969 1286419 out.go:303] Setting JSON to false
	I1128 04:24:23.917321 1286419 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25599,"bootTime":1701119865,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:24:23.917410 1286419 start.go:138] virtualization:  
	I1128 04:24:23.922166 1286419 out.go:177] * [functional-789811] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:24:23.925127 1286419 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:24:23.925173 1286419 notify.go:220] Checking for updates...
	I1128 04:24:23.929066 1286419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:24:23.930831 1286419 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:24:23.932569 1286419 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:24:23.934090 1286419 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:24:23.935544 1286419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:24:23.937714 1286419 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:24:23.938272 1286419 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:24:23.961897 1286419 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:24:23.962020 1286419 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:24:24.066276 1286419 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-28 04:24:24.056100309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:24:24.066397 1286419 docker.go:295] overlay module found
	I1128 04:24:24.069712 1286419 out.go:177] * Using the docker driver based on existing profile
	I1128 04:24:24.071284 1286419 start.go:298] selected driver: docker
	I1128 04:24:24.071300 1286419 start.go:902] validating driver "docker" against &{Name:functional-789811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-789811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:24:24.071421 1286419 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:24:24.074127 1286419 out.go:177] 
	W1128 04:24:24.076037 1286419 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1128 04:24:24.078016 1286419 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-789811 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.61s)
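The failing leg demonstrates that --dry-run still runs resource validation: 250MB is below the 1800MB usable minimum, so the command exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before touching the cluster. Condensed:

	out/minikube-linux-arm64 start -p functional-789811 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23; the second dry-run, without the tiny memory request, passes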

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-789811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-789811 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (344.027715ms)

                                                
                                                
-- stdout --
	* [functional-789811] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 04:24:23.570198 1286349 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:24:23.570417 1286349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:24:23.570429 1286349 out.go:309] Setting ErrFile to fd 2...
	I1128 04:24:23.570434 1286349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:24:23.572485 1286349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:24:23.572992 1286349 out.go:303] Setting JSON to false
	I1128 04:24:23.574010 1286349 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":25598,"bootTime":1701119865,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:24:23.574095 1286349 start.go:138] virtualization:  
	I1128 04:24:23.577823 1286349 out.go:177] * [functional-789811] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1128 04:24:23.579752 1286349 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:24:23.579724 1286349 notify.go:220] Checking for updates...
	I1128 04:24:23.581370 1286349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:24:23.583694 1286349 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:24:23.585495 1286349 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:24:23.587233 1286349 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:24:23.588966 1286349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:24:23.591246 1286349 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:24:23.591790 1286349 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:24:23.627397 1286349 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:24:23.627524 1286349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:24:23.755994 1286349 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-28 04:24:23.739634355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:24:23.756111 1286349 docker.go:295] overlay module found
	I1128 04:24:23.758620 1286349 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1128 04:24:23.760343 1286349 start.go:298] selected driver: docker
	I1128 04:24:23.760366 1286349 start.go:902] validating driver "docker" against &{Name:functional-789811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-789811 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:24:23.760467 1286349 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:24:23.766233 1286349 out.go:177] 
	W1128 04:24:23.773578 1286349 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1128 04:24:23.784244 1286349 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.34s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
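status accepts a Go template via -f and structured output via -o json; the run above templates the Host, Kubelet, APIServer and Kubeconfig fields. A minimal sketch:

	minikube -p functional-789811 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	minikube -p functional-789811 status -o json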

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-789811 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-789811 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jls7v" [fadebec5-074d-4105-8107-9248784ba5fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-jls7v" [fadebec5-074d-4105-8107-9248784ba5fa] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.038310733s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30152
functional_test.go:1674: http://192.168.49.2:30152: success! body:

Hostname: hello-node-connect-7799dfb7c6-jls7v

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30152
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.73s)
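The subtest above is a plain NodePort round trip: create a deployment, expose it on port 8080, resolve the node URL, and fetch it. A hand-run sketch of the same flow (curl substituted for the test's Go HTTP client):

    kubectl --context functional-789811 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-789811 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-arm64 -p functional-789811 service hello-node-connect --url)
    curl -s "$URL"    # echoserver answers with the hostname/request dump shown above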

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (24.68s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9c7794fd-0684-4bb8-8570-6096d6a5eb55] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.030315841s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-789811 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-789811 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-789811 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-789811 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0bf36700-db24-413e-8ffa-db6dc051c6df] Pending
helpers_test.go:344: "sp-pod" [0bf36700-db24-413e-8ffa-db6dc051c6df] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0bf36700-db24-413e-8ffa-db6dc051c6df] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.019842203s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-789811 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-789811 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-789811 delete -f testdata/storage-provisioner/pod.yaml: (1.329396333s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-789811 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ac0e653a-9a5d-48b0-ad2e-b83c2a80df11] Pending
helpers_test.go:344: "sp-pod" [ac0e653a-9a5d-48b0-ad2e-b83c2a80df11] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.017587259s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-789811 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.68s)
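The report does not reproduce testdata/storage-provisioner/pvc.yaml itself; a minimal claim of the same shape (name taken from the "get pvc myclaim" call above, storage size assumed) would be:

    kubectl --context functional-789811 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi    # assumed; the actual testdata value is not shown in this log
    EOF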

TestFunctional/parallel/SSHCmd (0.95s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.95s)

TestFunctional/parallel/CpCmd (1.86s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh -n functional-789811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 cp functional-789811:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3229228475/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh -n functional-789811 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)
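The cp subtest is a round trip: push a local file into the node, read it back over ssh, then pull it out again. Equivalent by hand (local destination path assumed):

    out/minikube-linux-arm64 -p functional-789811 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-789811 ssh -n functional-789811 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p functional-789811 cp functional-789811:/home/docker/cp-test.txt /tmp/cp-test-copy.txt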

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1261415/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo cat /etc/test/nested/copy/1261415/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1261415.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo cat /etc/ssl/certs/1261415.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1261415.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo cat /usr/share/ca-certificates/1261415.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/12614152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo cat /etc/ssl/certs/12614152.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/12614152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo cat /usr/share/ca-certificates/12614152.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.44s)
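The .0 file names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: the synced certificate is installed both under its own name and under the hash OpenSSL uses for CA lookups. The hash for a given PEM can be computed locally (certificate path assumed):

    openssl x509 -noout -subject_hash -in /path/to/1261415.pem    # prints e.g. 51391683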

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-789811 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.9s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 ssh "sudo systemctl is-active docker": exit status 1 (394.770551ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 ssh "sudo systemctl is-active containerd": exit status 1 (500.733407ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.90s)
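The non-zero exits here are the expected outcome: systemctl is-active prints the unit state and exits 0 only for an active unit, so "inactive" with exit status 3 is precisely what proves docker and containerd are disabled on a crio cluster. A quick manual check:

    out/minikube-linux-arm64 -p functional-789811 ssh 'sudo systemctl is-active docker; echo exit=$?'
    # expected: inactive, then exit=3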

TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-789811 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-789811 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-789811 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-789811 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1284581: os: process already finished
helpers_test.go:508: unable to kill pid 1284451: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-789811 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-789811 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a616bc58-84cf-4cc1-a9de-972c756a5f7d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1128 04:23:56.522067 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [a616bc58-84cf-4cc1-a9de-972c756a5f7d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.017831795s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-789811 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.48.93 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-789811 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
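Condensed, the serial tunnel flow above is: start minikube tunnel in the background, wait for the LoadBalancer service to be assigned an ingress IP, hit that IP directly, then tear the tunnel down. A sketch (curl substituted for the test's HTTP client; the IP comes from the AccessDirect step above):

    out/minikube-linux-arm64 -p functional-789811 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    kubectl --context functional-789811 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -s http://10.99.48.93/
    kill $TUNNEL_PID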

TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-789811 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-789811 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-8mt8v" [447363f8-d982-40ed-bd69-4eac7e85258a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-8mt8v" [447363f8-d982-40ed-bd69-4eac7e85258a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.015202133s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "364.854832ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "83.833174ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "356.680884ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "82.867222ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (7.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdany-port1193068637/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701145459068268986" to /tmp/TestFunctionalparallelMountCmdany-port1193068637/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701145459068268986" to /tmp/TestFunctionalparallelMountCmdany-port1193068637/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701145459068268986" to /tmp/TestFunctionalparallelMountCmdany-port1193068637/001/test-1701145459068268986
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (447.140542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 28 04:24 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 28 04:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 28 04:24 test-1701145459068268986
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh cat /mount-9p/test-1701145459068268986
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-789811 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2c7a034e-e624-4376-ac57-c92a24745a2e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2c7a034e-e624-4376-ac57-c92a24745a2e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2c7a034e-e624-4376-ac57-c92a24745a2e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.018149801s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-789811 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdany-port1193068637/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.99s)
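The mount check has two halves: a 9p mount started as a daemon on the host, and a findmnt probe from inside the node. The first findmnt above failed with exit 1 simply because the mount was not ready yet; the test retries and the second probe passes. By hand (host directory assumed):

    out/minikube-linux-arm64 mount -p functional-789811 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T /mount-9p | grep 9p"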

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 service list -o json
functional_test.go:1493: Took "678.845115ms" to run "out/minikube-linux-arm64 -p functional-789811 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32188
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.62s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32188
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

TestFunctional/parallel/MountCmd/specific-port (2.6s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdspecific-port1685013480/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (460.147426ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdspecific-port1685013480/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 ssh "sudo umount -f /mount-9p": exit status 1 (534.557486ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-789811 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdspecific-port1685013480/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.60s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2774457100/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2774457100/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2774457100/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T" /mount1: (1.358753176s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-789811 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2774457100/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2774457100/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-789811 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2774457100/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.41s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.33s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 version -o=json --components: (1.328707108s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-789811 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-789811
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-789811 image ls --format short --alsologtostderr:
I1128 04:24:54.473028 1289121 out.go:296] Setting OutFile to fd 1 ...
I1128 04:24:54.473255 1289121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:54.473284 1289121 out.go:309] Setting ErrFile to fd 2...
I1128 04:24:54.473308 1289121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:54.473723 1289121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
I1128 04:24:54.476797 1289121 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:54.477000 1289121 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:54.477752 1289121 cli_runner.go:164] Run: docker container inspect functional-789811 --format={{.State.Status}}
I1128 04:24:54.504082 1289121 ssh_runner.go:195] Run: systemctl --version
I1128 04:24:54.504135 1289121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-789811
I1128 04:24:54.523411 1289121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34334 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/functional-789811/id_rsa Username:docker}
I1128 04:24:54.619725 1289121 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-789811 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| docker.io/library/nginx                 | alpine             | aae348c9fbd40 | 50.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | 9961cbceaf234 | 117MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | latest             | 5628e5ea3c17f | 196MB  |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 04b4c447bb9d4 | 121MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 3ca3ca488cf13 | 70MB   |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| gcr.io/google-containers/addon-resizer  | functional-789811  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | 05c284c929889 | 59.3MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-789811 image ls --format table --alsologtostderr:
I1128 04:24:55.122099 1289253 out.go:296] Setting OutFile to fd 1 ...
I1128 04:24:55.122304 1289253 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:55.122330 1289253 out.go:309] Setting ErrFile to fd 2...
I1128 04:24:55.122349 1289253 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:55.122644 1289253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
I1128 04:24:55.123580 1289253 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:55.123804 1289253 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:55.124455 1289253 cli_runner.go:164] Run: docker container inspect functional-789811 --format={{.State.Status}}
I1128 04:24:55.147419 1289253 ssh_runner.go:195] Run: systemctl --version
I1128 04:24:55.147475 1289253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-789811
I1128 04:24:55.192945 1289253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34334 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/functional-789811/id_rsa Username:docker}
I1128 04:24:55.291922 1289253 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)
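All four ImageList subtests read the same "sudo crictl images --output json" data from the node (visible in each stderr trace) and differ only in rendering. The corresponding invocations:

    out/minikube-linux-arm64 -p functional-789811 image ls --format short
    out/minikube-linux-arm64 -p functional-789811 image ls --format table
    out/minikube-linux-arm64 -p functional-789811 image ls --format json
    out/minikube-linux-arm64 -p functional-789811 image ls --format yaml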

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-789811 image ls --format json --alsologtostderr:
[{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-789811"],"size":"34114467"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"117252916"},
{"id":"aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50212152"},
{"id":"5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab"],"repoTags":["docker.io/library/nginx:latest"],"size":"196211465"},
{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},
{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"59253556"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},
{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb","registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"121119694"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"69992343"},
{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},
{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-789811 image ls --format json --alsologtostderr:
I1128 04:24:54.794575 1289181 out.go:296] Setting OutFile to fd 1 ...
I1128 04:24:54.794760 1289181 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:54.794773 1289181 out.go:309] Setting ErrFile to fd 2...
I1128 04:24:54.794780 1289181 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:54.795070 1289181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
I1128 04:24:54.795816 1289181 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:54.796002 1289181 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:54.796747 1289181 cli_runner.go:164] Run: docker container inspect functional-789811 --format={{.State.Status}}
I1128 04:24:54.816414 1289181 ssh_runner.go:195] Run: systemctl --version
I1128 04:24:54.816475 1289181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-789811
I1128 04:24:54.839491 1289181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34334 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/functional-789811/id_rsa Username:docker}
I1128 04:24:54.950998 1289181 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-789811 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:736342e81e97220f954b8c33846ba80d2d95f59b30225a5c63d063c8b250b0ab
repoTags:
- docker.io/library/nginx:latest
size: "196211465"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
- registry.k8s.io/kube-apiserver@sha256:a4c3e6bec39f5dcb221a2f08266513ab19b7d977ccc76a0bcaf04d4935ac0fb2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "121119694"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:ddb0fb05335238789e9a847f0c6731e1da918c42a389625cb2a7ec577ca20afe
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "59253556"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "50212152"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-789811
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:fe49ea386d014cbf10cd16f53900b91bb7e7c32c5cf4fdcea550c33bedfdc92e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "117252916"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:460fb2711108dbfbe9ad77860d0fd8aad842c0e97f1aee757b33f2218daece68
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "69992343"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-789811 image ls --format yaml --alsologtostderr:
I1128 04:24:54.460850 1289120 out.go:296] Setting OutFile to fd 1 ...
I1128 04:24:54.461068 1289120 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:54.461080 1289120 out.go:309] Setting ErrFile to fd 2...
I1128 04:24:54.461128 1289120 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:54.461422 1289120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
I1128 04:24:54.462196 1289120 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:54.462380 1289120 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:54.462995 1289120 cli_runner.go:164] Run: docker container inspect functional-789811 --format={{.State.Status}}
I1128 04:24:54.492740 1289120 ssh_runner.go:195] Run: systemctl --version
I1128 04:24:54.492804 1289120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-789811
I1128 04:24:54.516386 1289120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34334 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/functional-789811/id_rsa Username:docker}
I1128 04:24:54.620507 1289120 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
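
For readers reproducing the listing above by hand, a minimal sketch of the command this test drives (`minikube` stands in for the report's out/minikube-linux-arm64 binary; the profile name `demo` is illustrative):

    # list every image in the cluster's container runtime, as YAML
    minikube -p demo image ls --format yaml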

TestFunctional/parallel/ImageCommands/ImageBuild (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-789811 ssh pgrep buildkitd: exit status 1 (408.43211ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image build -t localhost/my-image:functional-789811 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 image build -t localhost/my-image:functional-789811 testdata/build --alsologtostderr: (2.33182367s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-789811 image build -t localhost/my-image:functional-789811 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 93ccb628f04
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-789811
--> 7e0f4cd52d5
Successfully tagged localhost/my-image:functional-789811
7e0f4cd52d5e28805db7ce937842289d865cc9baca0c2f2a91ea3a08732f2ca3
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-789811 image build -t localhost/my-image:functional-789811 testdata/build --alsologtostderr:
I1128 04:24:55.189137 1289260 out.go:296] Setting OutFile to fd 1 ...
I1128 04:24:55.190151 1289260 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:55.190169 1289260 out.go:309] Setting ErrFile to fd 2...
I1128 04:24:55.190176 1289260 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 04:24:55.190473 1289260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
I1128 04:24:55.191430 1289260 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:55.192014 1289260 config.go:182] Loaded profile config "functional-789811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 04:24:55.192624 1289260 cli_runner.go:164] Run: docker container inspect functional-789811 --format={{.State.Status}}
I1128 04:24:55.221208 1289260 ssh_runner.go:195] Run: systemctl --version
I1128 04:24:55.221307 1289260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-789811
I1128 04:24:55.243885 1289260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34334 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/functional-789811/id_rsa Username:docker}
I1128 04:24:55.349461 1289260 build_images.go:151] Building image from path: /tmp/build.716058768.tar
I1128 04:24:55.349570 1289260 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1128 04:24:55.363784 1289260 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.716058768.tar
I1128 04:24:55.373166 1289260 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.716058768.tar: stat -c "%s %y" /var/lib/minikube/build/build.716058768.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.716058768.tar': No such file or directory
I1128 04:24:55.373203 1289260 ssh_runner.go:362] scp /tmp/build.716058768.tar --> /var/lib/minikube/build/build.716058768.tar (3072 bytes)
I1128 04:24:55.407592 1289260 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.716058768
I1128 04:24:55.419147 1289260 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.716058768 -xf /var/lib/minikube/build/build.716058768.tar
I1128 04:24:55.430478 1289260 crio.go:297] Building image: /var/lib/minikube/build/build.716058768
I1128 04:24:55.430581 1289260 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-789811 /var/lib/minikube/build/build.716058768 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1128 04:24:57.392349 1289260 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-789811 /var/lib/minikube/build/build.716058768 --cgroup-manager=cgroupfs: (1.961738421s)
I1128 04:24:57.392418 1289260 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.716058768
I1128 04:24:57.404245 1289260 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.716058768.tar
I1128 04:24:57.415134 1289260 build_images.go:207] Built localhost/my-image:functional-789811 from /tmp/build.716058768.tar
I1128 04:24:57.415164 1289260 build_images.go:123] succeeded building to: functional-789811
I1128 04:24:57.415169 1289260 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.00s)
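
For reference, a minimal sketch of the build flow exercised here; the Dockerfile mirrors the three STEP lines in the stdout above (directory layout and tag are illustrative):

    mkdir -p build && printf 'hello\n' > build/content.txt
    cat > build/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    # builds inside the cluster (via podman on the crio runtime, per the log above)
    minikube -p demo image build -t localhost/my-image:demo ./build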

TestFunctional/parallel/ImageCommands/Setup (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.823422722s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-789811
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image load --daemon gcr.io/google-containers/addon-resizer:functional-789811 --alsologtostderr
2023/11/28 04:24:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 image load --daemon gcr.io/google-containers/addon-resizer:functional-789811 --alsologtostderr: (5.325124791s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.66s)
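
A minimal sketch of the load path this test drives: with --daemon, the image is taken from the host's Docker daemon and pushed into the cluster's container runtime (tag and profile are illustrative):

    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:demo
    minikube -p demo image load --daemon gcr.io/google-containers/addon-resizer:demo
    minikube -p demo image ls | grep addon-resizer   # confirm it arrived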

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)
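
The three UpdateContextCmd variants in this suite all drive the same subcommand; a sketch of typical usage (assumes kubectl is installed on the host):

    # rewrite this profile's kubeconfig entry to the cluster's current IP and port
    minikube -p demo update-context
    kubectl config current-context   # should now point at the refreshed context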

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image load --daemon gcr.io/google-containers/addon-resizer:functional-789811 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 image load --daemon gcr.io/google-containers/addon-resizer:functional-789811 --alsologtostderr: (2.955754128s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.21s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.541918134s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-789811
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image load --daemon gcr.io/google-containers/addon-resizer:functional-789811 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 image load --daemon gcr.io/google-containers/addon-resizer:functional-789811 --alsologtostderr: (3.640958635s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.46s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image save gcr.io/google-containers/addon-resizer:functional-789811 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.95s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image rm gcr.io/google-containers/addon-resizer:functional-789811 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-789811 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.038048282s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.32s)
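
Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above amount to a tarball round trip; a sketch (paths and tag illustrative):

    minikube -p demo image save gcr.io/google-containers/addon-resizer:demo /tmp/addon-resizer.tar
    minikube -p demo image rm gcr.io/google-containers/addon-resizer:demo
    minikube -p demo image load /tmp/addon-resizer.tar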

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-789811
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-789811 image save --daemon gcr.io/google-containers/addon-resizer:functional-789811 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-789811
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)
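
This test covers the inverse direction: image save --daemon exports from the cluster's runtime back into the host's Docker daemon. A sketch:

    docker rmi gcr.io/google-containers/addon-resizer:demo             # drop the host copy
    minikube -p demo image save --daemon gcr.io/google-containers/addon-resizer:demo
    docker image inspect gcr.io/google-containers/addon-resizer:demo   # present again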

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-789811
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-789811
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-789811
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (97.54s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-120112 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1128 04:26:12.676807 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-120112 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m37.542388093s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (97.54s)
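
Pinning an old release is the point of this legacy suite; a sketch of the start invocation (profile name illustrative, flags mirror the run above):

    minikube start -p legacy --kubernetes-version=v1.18.20 \
        --memory=4096 --wait=true --driver=docker --container-runtime=crio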

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.57s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-120112 addons enable ingress --alsologtostderr -v=5
E1128 04:26:40.363390 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-120112 addons enable ingress --alsologtostderr -v=5: (12.569752485s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.57s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.72s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-120112 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.72s)

TestJSONOutput/start/Command (81.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-320302 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1128 04:30:14.954814 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:31:12.676567 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-320302 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m21.74814453s)
--- PASS: TestJSONOutput/start/Command (81.75s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.85s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-320302 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.77s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-320302 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.77s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-320302 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-320302 --output=json --user=testUser: (5.946877577s)
--- PASS: TestJSONOutput/stop/Command (5.95s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-747213 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-747213 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.088775ms)
-- stdout --
	{"specversion":"1.0","id":"31a3bdbb-49bd-49c8-9b17-8ba83b9a5247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-747213] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"36daad64-9ee6-4cee-a596-b68ce0d43459","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17671"}}
	{"specversion":"1.0","id":"8aedc6d0-41d9-4b5f-adeb-e1c5b52e360d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d32d82a6-bece-49f5-b300-657dbbeac173","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig"}}
	{"specversion":"1.0","id":"9958d5d9-cf95-47bc-81ca-c65612896b1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube"}}
	{"specversion":"1.0","id":"679ad50c-e1b5-4090-9bb1-b6dda981b7a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cf75e60f-6508-4ca5-8d6b-36f8d95a84de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9a6134e3-6178-4106-8666-195fdf7a5d08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-747213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-747213
--- PASS: TestErrorJSONOutput (0.27s)
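
Each stdout line above is a CloudEvents-style JSON object with specversion, type, and data fields. A sketch of consuming the stream, assuming jq is available on the host (jq is not part of the test itself):

    # print only the human-readable step messages, skipping info and error events
    minikube start -p demo --output=json --driver=docker |
        jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'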

TestKicCustomNetwork/create_custom_network (45.1s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-873393 --network=
E1128 04:31:36.875522 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:31:51.642201 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:51.649231 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:51.660873 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:51.682534 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:51.723423 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:51.804153 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:51.964947 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:52.285834 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:52.926347 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:54.206561 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:31:56.766808 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:32:01.887676 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:32:12.128609 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-873393 --network=: (42.983916083s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-873393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-873393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-873393: (2.091539685s)
--- PASS: TestKicCustomNetwork/create_custom_network (45.10s)
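
With an empty --network= value, minikube creates (or reuses) a user-defined Docker network, by default named after the profile; the next test passes --network=bridge to attach to Docker's default bridge instead. A sketch:

    minikube start -p demo --network=          # minikube picks/creates the network
    minikube start -p demo2 --network=bridge   # reuse Docker's default bridge
    docker network ls --format '{{.Name}}'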

TestKicCustomNetwork/use_default_bridge_network (35.47s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-175021 --network=bridge
E1128 04:32:32.608786 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-175021 --network=bridge: (33.419617329s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-175021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-175021
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-175021: (2.027127081s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.47s)

TestKicExistingNetwork (39.42s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-027436 --network=existing-network
E1128 04:33:13.569014 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-027436 --network=existing-network: (37.202646442s)
helpers_test.go:175: Cleaning up "existing-network-027436" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-027436
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-027436: (2.050393438s)
--- PASS: TestKicExistingNetwork (39.42s)
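
Here the network is created out of band and minikube is expected to join it and, on delete, to leave networks it did not create untouched. A sketch of what the test encodes (profile name illustrative):

    docker network create existing-network
    minikube start -p demo --network=existing-network
    minikube delete -p demo
    docker network inspect existing-network   # should still exist after the delete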

TestKicCustomSubnet (34s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-932326 --subnet=192.168.60.0/24
E1128 04:33:53.035201 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-932326 --subnet=192.168.60.0/24: (31.849489747s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-932326 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-932326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-932326
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-932326: (2.130931576s)
--- PASS: TestKicCustomSubnet (34.00s)
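
A sketch of the subnet check this test performs (the subnet matches the run above; the profile name is illustrative):

    minikube start -p demo --subnet=192.168.60.0/24
    # the docker network minikube creates is named after the profile
    docker network inspect demo --format '{{(index .IPAM.Config 0).Subnet}}'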

TestKicStaticIP (35.18s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-451228 --static-ip=192.168.200.200
E1128 04:34:20.715786 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 04:34:35.489232 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-451228 --static-ip=192.168.200.200: (32.827447012s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-451228 ip
helpers_test.go:175: Cleaning up "static-ip-451228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-451228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-451228: (2.152819568s)
--- PASS: TestKicStaticIP (35.18s)
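
A sketch of the static-IP flow (per minikube's documentation this flag needs the docker or podman driver and a private IPv4 address; the address below matches the run above):

    minikube start -p demo --static-ip=192.168.200.200
    minikube -p demo ip   # expected to print 192.168.200.200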

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (75.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-575870 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-575870 --driver=docker  --container-runtime=crio: (33.463069934s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-578413 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-578413 --driver=docker  --container-runtime=crio: (36.580281697s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-575870
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-578413
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-578413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-578413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-578413: (2.038645694s)
helpers_test.go:175: Cleaning up "first-575870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-575870
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-575870: (2.020295289s)
--- PASS: TestMinikubeProfile (75.44s)
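
A sketch of the profile juggling this test performs (profile names illustrative):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first          # make "first" the active profile
    minikube profile list -ojson    # machine-readable view of all profiles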

TestMountStart/serial/StartWithMountFirst (7.44s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-933428 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-933428 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.443444404s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.44s)
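
A sketch of the mount options under test; --no-kubernetes keeps the node minimal since only the mount matters here, and the host directory appears in the guest at /minikube-host (flag values mirror the run above):

    minikube start -p mnt --memory=2048 --no-kubernetes \
        --mount --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464
    minikube -p mnt ssh -- ls /minikube-host   # the verification the next test runs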

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-933428 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (6.76s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-935086 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-935086 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.7574037s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.76s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-935086 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-933428 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-933428 --alsologtostderr -v=5: (1.67787708s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-935086 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-935086
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-935086: (1.25011743s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (8.16s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-935086
E1128 04:36:12.676403 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-935086: (7.156304221s)
--- PASS: TestMountStart/serial/RestartStopped (8.16s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-935086 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (125.3s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-448128 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1128 04:36:51.638429 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:37:19.329774 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 04:37:35.723618 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-448128 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m4.716623281s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.30s)
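
A sketch of the two-node bring-up (profile name illustrative; flags mirror the run above):

    minikube start -p multi --nodes=2 --memory=2200 \
        --driver=docker --container-runtime=crio
    minikube -p multi status   # expect a control plane plus one worker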

TestMultiNode/serial/DeployApp2Nodes (5.64s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-448128 -- rollout status deployment/busybox: (3.362639089s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-9h4s8 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-cpvdq -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-9h4s8 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-cpvdq -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-9h4s8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-448128 -- exec busybox-5bc68d56bd-cpvdq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.64s)

TestMultiNode/serial/AddNode (51.49s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-448128 -v 3 --alsologtostderr
E1128 04:38:53.035274 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-448128 -v 3 --alsologtostderr: (50.766380219s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.49s)
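
A sketch of growing the cluster after the fact; added nodes follow the <profile>-m0N naming visible in the log:

    minikube node add -p multi    # appends a worker (m03 in this run)
    minikube -p multi status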

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (11.49s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp testdata/cp-test.txt multinode-448128:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile828950957/001/cp-test_multinode-448128.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128:/home/docker/cp-test.txt multinode-448128-m02:/home/docker/cp-test_multinode-448128_multinode-448128-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m02 "sudo cat /home/docker/cp-test_multinode-448128_multinode-448128-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128:/home/docker/cp-test.txt multinode-448128-m03:/home/docker/cp-test_multinode-448128_multinode-448128-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m03 "sudo cat /home/docker/cp-test_multinode-448128_multinode-448128-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp testdata/cp-test.txt multinode-448128-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile828950957/001/cp-test_multinode-448128-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128-m02:/home/docker/cp-test.txt multinode-448128:/home/docker/cp-test_multinode-448128-m02_multinode-448128.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128 "sudo cat /home/docker/cp-test_multinode-448128-m02_multinode-448128.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128-m02:/home/docker/cp-test.txt multinode-448128-m03:/home/docker/cp-test_multinode-448128-m02_multinode-448128-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m03 "sudo cat /home/docker/cp-test_multinode-448128-m02_multinode-448128-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp testdata/cp-test.txt multinode-448128-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile828950957/001/cp-test_multinode-448128-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128-m03:/home/docker/cp-test.txt multinode-448128:/home/docker/cp-test_multinode-448128-m03_multinode-448128.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128 "sudo cat /home/docker/cp-test_multinode-448128-m03_multinode-448128.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 cp multinode-448128-m03:/home/docker/cp-test.txt multinode-448128-m02:/home/docker/cp-test_multinode-448128-m03_multinode-448128-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 ssh -n multinode-448128-m02 "sudo cat /home/docker/cp-test_multinode-448128-m03_multinode-448128-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.49s)
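
The block above is dense, but it simply cycles `minikube cp` through every supported direction and verifies each copy with `ssh -n <node> "sudo cat ..."`. As a minimal sketch of the three forms (profile, node, and path names are placeholders, not values from this run):

	minikube -p <profile> cp <local-file> <node>:<remote-path>             # host -> node
	minikube -p <profile> cp <node>:<remote-path> <local-file>             # node -> host
	minikube -p <profile> cp <nodeA>:<remote-path> <nodeB>:<remote-path>   # node -> node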

                                                
                                    
TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-448128 node stop m03: (1.261126543s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-448128 status: exit status 7 (581.116028ms)

                                                
                                                
-- stdout --
	multinode-448128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-448128-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-448128-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-448128 status --alsologtostderr: exit status 7 (563.053088ms)

                                                
                                                
-- stdout --
	multinode-448128
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-448128-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-448128-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 04:39:43.320516 1336039 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:39:43.320745 1336039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:39:43.320759 1336039 out.go:309] Setting ErrFile to fd 2...
	I1128 04:39:43.320766 1336039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:39:43.321061 1336039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:39:43.321287 1336039 out.go:303] Setting JSON to false
	I1128 04:39:43.321373 1336039 mustload.go:65] Loading cluster: multinode-448128
	I1128 04:39:43.321445 1336039 notify.go:220] Checking for updates...
	I1128 04:39:43.321920 1336039 config.go:182] Loaded profile config "multinode-448128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:39:43.321934 1336039 status.go:255] checking status of multinode-448128 ...
	I1128 04:39:43.322495 1336039 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Status}}
	I1128 04:39:43.343101 1336039 status.go:330] multinode-448128 host status = "Running" (err=<nil>)
	I1128 04:39:43.343143 1336039 host.go:66] Checking if "multinode-448128" exists ...
	I1128 04:39:43.343591 1336039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-448128
	I1128 04:39:43.361615 1336039 host.go:66] Checking if "multinode-448128" exists ...
	I1128 04:39:43.361950 1336039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:39:43.362074 1336039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128
	I1128 04:39:43.393913 1336039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34399 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128/id_rsa Username:docker}
	I1128 04:39:43.491559 1336039 ssh_runner.go:195] Run: systemctl --version
	I1128 04:39:43.497325 1336039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:39:43.511163 1336039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:39:43.586491 1336039 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-28 04:39:43.57683116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:39:43.587071 1336039 kubeconfig.go:92] found "multinode-448128" server: "https://192.168.58.2:8443"
	I1128 04:39:43.587125 1336039 api_server.go:166] Checking apiserver status ...
	I1128 04:39:43.587173 1336039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:39:43.599927 1336039 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1275/cgroup
	I1128 04:39:43.611261 1336039 api_server.go:182] apiserver freezer: "13:freezer:/docker/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/crio/crio-dc2c5388a3ecefb104841eaf473ad80babce019ac28952485ffeb35ba8cb38a2"
	I1128 04:39:43.611344 1336039 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/883175574ae59c822e2c3282897b4c03c497c821de3aa9d276d4929340f1f188/crio/crio-dc2c5388a3ecefb104841eaf473ad80babce019ac28952485ffeb35ba8cb38a2/freezer.state
	I1128 04:39:43.621895 1336039 api_server.go:204] freezer state: "THAWED"
	I1128 04:39:43.621922 1336039 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1128 04:39:43.630830 1336039 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1128 04:39:43.630864 1336039 status.go:421] multinode-448128 apiserver status = Running (err=<nil>)
	I1128 04:39:43.630882 1336039 status.go:257] multinode-448128 status: &{Name:multinode-448128 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1128 04:39:43.630898 1336039 status.go:255] checking status of multinode-448128-m02 ...
	I1128 04:39:43.631216 1336039 cli_runner.go:164] Run: docker container inspect multinode-448128-m02 --format={{.State.Status}}
	I1128 04:39:43.649141 1336039 status.go:330] multinode-448128-m02 host status = "Running" (err=<nil>)
	I1128 04:39:43.649165 1336039 host.go:66] Checking if "multinode-448128-m02" exists ...
	I1128 04:39:43.649475 1336039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-448128-m02
	I1128 04:39:43.667186 1336039 host.go:66] Checking if "multinode-448128-m02" exists ...
	I1128 04:39:43.667509 1336039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 04:39:43.667561 1336039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-448128-m02
	I1128 04:39:43.685733 1336039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34404 SSHKeyPath:/home/jenkins/minikube-integration/17671-1256059/.minikube/machines/multinode-448128-m02/id_rsa Username:docker}
	I1128 04:39:43.779022 1336039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:39:43.793186 1336039 status.go:257] multinode-448128-m02 status: &{Name:multinode-448128-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1128 04:39:43.793220 1336039 status.go:255] checking status of multinode-448128-m03 ...
	I1128 04:39:43.793547 1336039 cli_runner.go:164] Run: docker container inspect multinode-448128-m03 --format={{.State.Status}}
	I1128 04:39:43.812937 1336039 status.go:330] multinode-448128-m03 host status = "Stopped" (err=<nil>)
	I1128 04:39:43.812977 1336039 status.go:343] host is not running, skipping remaining checks
	I1128 04:39:43.812992 1336039 status.go:257] multinode-448128-m03 status: &{Name:multinode-448128-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
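
The --alsologtostderr trace above also documents how `status` verifies the apiserver on a running control plane: resolve the kube-apiserver PID, read its freezer cgroup, require state THAWED, then probe /healthz. A rough shell equivalent of those ssh_runner steps (a sketch assembled from the trace, not minikube's actual implementation; the endpoint is the one shown above):

	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	cg=$(sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3)
	sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"   # THAWED => not paused
	curl -k https://192.168.58.2:8443/healthz              # "ok" => apiserver healthy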

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.64s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-448128 node start m03 --alsologtostderr: (11.751431434s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.64s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (120.59s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-448128
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-448128
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-448128: (25.080614452s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-448128 --wait=true -v=8 --alsologtostderr
E1128 04:41:12.676624 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 04:41:51.638417 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-448128 --wait=true -v=8 --alsologtostderr: (1m35.342150885s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-448128
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.59s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-448128 node delete m03: (4.45088551s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.31s)
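
The escaped go-template in the last assertion is easier to read unescaped; it emits the Ready condition status for every node, one per line, which is how the test confirms the remaining nodes are Ready:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'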

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.16s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-448128 stop: (23.950089408s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-448128 status: exit status 7 (106.241548ms)

                                                
                                                
-- stdout --
	multinode-448128
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-448128-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-448128 status --alsologtostderr: exit status 7 (104.93484ms)

                                                
                                                
-- stdout --
	multinode-448128
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-448128-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 04:42:26.472237 1344097 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:42:26.472478 1344097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:42:26.472510 1344097 out.go:309] Setting ErrFile to fd 2...
	I1128 04:42:26.472532 1344097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:42:26.472837 1344097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:42:26.473082 1344097 out.go:303] Setting JSON to false
	I1128 04:42:26.473189 1344097 mustload.go:65] Loading cluster: multinode-448128
	I1128 04:42:26.473232 1344097 notify.go:220] Checking for updates...
	I1128 04:42:26.473653 1344097 config.go:182] Loaded profile config "multinode-448128": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:42:26.473666 1344097 status.go:255] checking status of multinode-448128 ...
	I1128 04:42:26.474255 1344097 cli_runner.go:164] Run: docker container inspect multinode-448128 --format={{.State.Status}}
	I1128 04:42:26.493790 1344097 status.go:330] multinode-448128 host status = "Stopped" (err=<nil>)
	I1128 04:42:26.493814 1344097 status.go:343] host is not running, skipping remaining checks
	I1128 04:42:26.493822 1344097 status.go:257] multinode-448128 status: &{Name:multinode-448128 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1128 04:42:26.493867 1344097 status.go:255] checking status of multinode-448128-m02 ...
	I1128 04:42:26.494170 1344097 cli_runner.go:164] Run: docker container inspect multinode-448128-m02 --format={{.State.Status}}
	I1128 04:42:26.512172 1344097 status.go:330] multinode-448128-m02 host status = "Stopped" (err=<nil>)
	I1128 04:42:26.512194 1344097 status.go:343] host is not running, skipping remaining checks
	I1128 04:42:26.512207 1344097 status.go:257] multinode-448128-m02 status: &{Name:multinode-448128-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.16s)
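
Worth noting for scripting: in both runs above, `status` exits 7 once hosts are stopped, so the test (and any wrapper script) can branch on the exit code instead of parsing stdout. A sketch, generalizing only from the codes visible in this report (7 for stopped hosts here, 2 in the NoKubernetes tests further down where the host runs but Kubernetes is stopped):

	minikube -p <profile> status
	case $? in
	  0) echo "everything running" ;;
	  2) echo "host up, Kubernetes stopped" ;;
	  7) echo "host stopped" ;;
	  *) echo "error or unknown state" ;;
	esac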

                                                
                                    
TestMultiNode/serial/RestartMultiNode (80.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-448128 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-448128 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.041170555s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-448128 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.85s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-448128
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-448128-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-448128-m02 --driver=docker  --container-runtime=crio: exit status 14 (101.551556ms)

                                                
                                                
-- stdout --
	* [multinode-448128-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-448128-m02' is duplicated with machine name 'multinode-448128-m02' in profile 'multinode-448128'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-448128-m03 --driver=docker  --container-runtime=crio
E1128 04:43:53.034635 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-448128-m03 --driver=docker  --container-runtime=crio: (33.840346646s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-448128
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-448128: exit status 80 (384.450507ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-448128
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-448128-m03 already exists in multinode-448128-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-448128-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-448128-m03: (2.055878149s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.47s)

                                                
                                    
TestPreload (163.33s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-592962 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1128 04:45:16.076799 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-592962 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.256806207s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-592962 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-592962 image pull gcr.io/k8s-minikube/busybox: (1.928455888s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-592962
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-592962: (5.922473669s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-592962 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1128 04:46:12.677340 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 04:46:51.638509 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-592962 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m8.482371152s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-592962 image list
helpers_test.go:175: Cleaning up "test-preload-592962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-592962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-592962: (2.460877366s)
--- PASS: TestPreload (163.33s)
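
Condensed, the preload scenario is: start with --preload=false on an older Kubernetes, side-load an image, stop, restart with defaults, and check that the image survived. A sketch of the same flow (profile name is a placeholder):

	minikube start -p <profile> --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	minikube -p <profile> image pull gcr.io/k8s-minikube/busybox
	minikube stop -p <profile>
	minikube start -p <profile> --wait=true --driver=docker --container-runtime=crio
	minikube -p <profile> image list   # busybox should still appear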

                                                
                                    
TestScheduledStopUnix (107.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-020100 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-020100 --memory=2048 --driver=docker  --container-runtime=crio: (29.878192957s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-020100 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-020100 -n scheduled-stop-020100
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-020100 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-020100 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-020100 -n scheduled-stop-020100
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-020100
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-020100 --schedule 15s
E1128 04:48:14.690021 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1128 04:48:53.035595 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-020100
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-020100: exit status 7 (84.706313ms)

                                                
                                                
-- stdout --
	scheduled-stop-020100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-020100 -n scheduled-stop-020100
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-020100 -n scheduled-stop-020100: exit status 7 (84.623344ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-020100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-020100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-020100: (5.178370942s)
--- PASS: TestScheduledStopUnix (107.02s)
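
For reference, the scheduled-stop surface exercised above (all flags taken from the commands in this test; the profile name is a placeholder):

	minikube stop -p <profile> --schedule 5m                  # arm a stop 5 minutes out
	minikube stop -p <profile> --cancel-scheduled             # disarm a pending stop
	minikube status -p <profile> --format='{{.TimeToStop}}'   # inspect the countdown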

                                                
                                    
TestInsufficientStorage (11.36s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-576814 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-576814 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.691140311s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b307812a-f0fb-431d-b196-dab64731d5df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-576814] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1c1a231-728e-4e18-9fa0-f8b587ebb1a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17671"}}
	{"specversion":"1.0","id":"b33e6fca-e9d5-45b5-99d0-e1c44956c1f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"62f04bff-93a0-4f1c-8424-b9a874992170","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig"}}
	{"specversion":"1.0","id":"1d61b129-ec7a-4c3d-9252-034f2c042aae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube"}}
	{"specversion":"1.0","id":"fd238b53-fc90-4188-a7ad-2ec2141514d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1de55222-1963-4388-be0b-d2df8d43ad07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5cc2a53e-ed21-4c18-a802-d56ac2789c38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"50e2dea1-464d-4d1c-a1ee-fd214236e5f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f24b799a-9e3e-439e-8d8a-f4ca0d177dd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcc01cd7-4183-4f5f-9d98-6f1fa5693843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"966575b1-47f2-4892-992b-ed1c8b0b5f4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-576814 in cluster insufficient-storage-576814","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"17c2af8d-649e-4290-aa2e-a9a2d853b885","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fec35e95-dd13-44f3-b523-7b49ceabc78c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6195414-48c5-4cce-84cd-2af2a5e95b94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-576814 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-576814 --output=json --layout=cluster: exit status 7 (339.9937ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-576814","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-576814","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 04:49:09.832369 1360787 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-576814" does not appear in /home/jenkins/minikube-integration/17671-1256059/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-576814 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-576814 --output=json --layout=cluster: exit status 7 (347.967895ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-576814","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-576814","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 04:49:10.179546 1360841 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-576814" does not appear in /home/jenkins/minikube-integration/17671-1256059/kubeconfig
	E1128 04:49:10.192745 1360841 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/insufficient-storage-576814/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-576814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-576814
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-576814: (1.983498156s)
--- PASS: TestInsufficientStorage (11.36s)
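
The MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON above are how the test fakes a full disk. With --output=json every stdout line is a CloudEvent, so the error payload can be pulled out mechanically; the jq filter below is my own illustration, not part of the test:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	minikube start -p <profile> --output=json --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'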

                                                
                                    
TestKubernetesUpgrade (389.95s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-541146 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-541146 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.260056659s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-541146
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-541146: (1.510342105s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-541146 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-541146 status --format={{.Host}}: exit status 7 (158.628817ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-541146 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-541146 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m49.842914842s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-541146 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-541146 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-541146 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (102.761429ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-541146] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-541146
	    minikube start -p kubernetes-upgrade-541146 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5411462 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-541146 --kubernetes-version=v1.29.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-541146 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-541146 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.654172892s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-541146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-541146
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-541146: (2.310485618s)
--- PASS: TestKubernetesUpgrade (389.95s)
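
Stripped of the test harness, the upgrade/downgrade contract above is three starts against one profile: an in-place upgrade succeeds, while a downgrade is refused with exit 106 plus the recovery suggestions quoted above. Sketch (profile name is a placeholder):

	minikube start -p <profile> --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p <profile>
	minikube start -p <profile> --kubernetes-version=v1.29.0-rc.0 --driver=docker --container-runtime=crio   # upgrade: allowed
	minikube start -p <profile> --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio        # downgrade: exit 106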

                                                
                                    
TestPause/serial/Start (94.47s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-143970 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-143970 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m34.472380514s)
--- PASS: TestPause/serial/Start (94.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.96s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-779758
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-779758: (1.066515965s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-831308 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-831308 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (116.37516ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-831308] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
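
As the MK_USAGE message says, --no-kubernetes and --kubernetes-version are mutually exclusive; if a version is pinned in the global config, it has to be unset first, exactly as the error suggests (profile name is a placeholder):

	minikube config unset kubernetes-version
	minikube start -p <profile> --no-kubernetes --driver=docker --container-runtime=crio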

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.4s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-831308 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-831308 --driver=docker  --container-runtime=crio: (32.010129504s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-831308 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.76s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-831308 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-831308 --no-kubernetes --driver=docker  --container-runtime=crio: (15.171700368s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-831308 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-831308 status -o json: exit status 2 (391.32885ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-831308","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-831308
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-831308: (2.194323821s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.76s)

                                                
                                    
TestNoKubernetes/serial/Start (8.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-831308 --no-kubernetes --driver=docker  --container-runtime=crio
E1128 04:56:12.677400 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-831308 --no-kubernetes --driver=docker  --container-runtime=crio: (8.866963875s)
--- PASS: TestNoKubernetes/serial/Start (8.87s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-831308 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-831308 "sudo systemctl is-active --quiet service kubelet": exit status 1 (344.916549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
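
The "Non-zero exit" here is the assertion passing: `systemctl is-active --quiet` exits 0 only for an active unit, and the ssh status 3 seen in stderr is the conventional "inactive" code, i.e. kubelet really is not running. The same check, usable interactively (a sketch; the command string is the one the test runs):

	minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet active" || echo "kubelet not active"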

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.93s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.93s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-831308
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-831308: (1.251294962s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.23s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-831308 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-831308 --driver=docker  --container-runtime=crio: (7.232193927s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-831308 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-831308 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.743304ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestNetworkPlugins/group/false (4.56s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-502804 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-502804 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (216.014536ms)

                                                
                                                
-- stdout --
	* [false-502804] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 04:56:34.094355 1395979 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:56:34.094629 1395979 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:56:34.094645 1395979 out.go:309] Setting ErrFile to fd 2...
	I1128 04:56:34.094652 1395979 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:56:34.095494 1395979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-1256059/.minikube/bin
	I1128 04:56:34.096391 1395979 out.go:303] Setting JSON to false
	I1128 04:56:34.098278 1395979 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27529,"bootTime":1701119865,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1128 04:56:34.098395 1395979 start.go:138] virtualization:  
	I1128 04:56:34.101542 1395979 out.go:177] * [false-502804] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 04:56:34.103857 1395979 notify.go:220] Checking for updates...
	I1128 04:56:34.106213 1395979 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:56:34.108005 1395979 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:56:34.110029 1395979 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-1256059/kubeconfig
	I1128 04:56:34.111849 1395979 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-1256059/.minikube
	I1128 04:56:34.113583 1395979 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 04:56:34.115595 1395979 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:56:34.117876 1395979 config.go:182] Loaded profile config "kubernetes-upgrade-541146": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:56:34.117991 1395979 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:56:34.141981 1395979 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 04:56:34.142098 1395979 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 04:56:34.228356 1395979 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 04:56:34.21849788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 04:56:34.228471 1395979 docker.go:295] overlay module found
	I1128 04:56:34.230646 1395979 out.go:177] * Using the docker driver based on user configuration
	I1128 04:56:34.232451 1395979 start.go:298] selected driver: docker
	I1128 04:56:34.232469 1395979 start.go:902] validating driver "docker" against <nil>
	I1128 04:56:34.232483 1395979 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:56:34.234881 1395979 out.go:177] 
	W1128 04:56:34.236801 1395979 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1128 04:56:34.238697 1395979 out.go:177] 

                                                
                                                
** /stderr **
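Exit status 14 (MK_USAGE) is the expected result for this group: minikube's argument validator rejects --cni=false with the crio runtime because crio ships no built-in pod networking and requires a CNI plugin. As a hedged sketch, not part of the test, a start invocation the validator would accept swaps in an explicit CNI (bridge is one valid choice):

    # same profile, but with a concrete CNI plugin instead of --cni=false
    out/minikube-linux-arm64 start -p false-502804 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio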
net_test.go:88: 
----------------------- debugLogs start: false-502804 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-502804" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 04:52:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-541146
contexts:
- context:
    cluster: kubernetes-upgrade-541146
    user: kubernetes-upgrade-541146
  name: kubernetes-upgrade-541146
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-541146
  user:
    client-certificate: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/client.crt
    client-key: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-502804

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-502804"

                                                
                                                
----------------------- debugLogs end: false-502804 [took: 4.160181532s] --------------------------------
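Every kubectl probe in the debug dump fails with "context was not found" for the same underlying reason: as the dumped kubeconfig shows, the only context belongs to kubernetes-upgrade-541146 and current-context is empty, because the false-502804 profile was rejected before a cluster (and hence a context) was ever created. A quick way to confirm that state by hand, using the KUBECONFIG path from the log above:

    # false-502804 is absent from the context list and no context is current
    kubectl --kubeconfig /home/jenkins/minikube-integration/17671-1256059/kubeconfig config get-contexts
    # only the kubernetes-upgrade-541146 profile exists
    out/minikube-linux-arm64 profile list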
helpers_test.go:175: Cleaning up "false-502804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-502804
--- PASS: TestNetworkPlugins/group/false (4.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (127.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-533879 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-533879 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m7.733398636s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-792937 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 05:01:12.677273 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-792937 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (1m9.620052192s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-533879 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b74767f2-7479-4f25-a870-08aa8edbfa6c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b74767f2-7479-4f25-a870-08aa8edbfa6c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.039295778s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-533879 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.85s)
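The DeployApp step is a plain kubectl workflow: apply the busybox manifest, wait (up to 8m0s) for a pod matching integration-test=busybox to become ready, then exec a trivial command ("ulimit -n") to prove the container is actually usable. A hedged approximation with stock kubectl; the harness polls the label selector itself, so kubectl wait here is a stand-in:

    # deploy, wait for readiness by label, then exec into the pod
    kubectl --context old-k8s-version-533879 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-533879 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-533879 exec busybox -- /bin/sh -c "ulimit -n"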

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-533879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-533879 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.78927543s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-533879 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.96s)
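The --images/--registries pair redirects the metrics-server addon at a deliberately unreachable registry (fake.domain); if I read the flags right, the deployment ends up referencing fake.domain/registry.k8s.io/echoserver:1.4, though the exact composed reference is an assumption. The follow-up describe only needs to show the override landed; a hedged way to read just the image field:

    # print the image reference the addon deployment actually uses
    kubectl --context old-k8s-version-533879 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'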

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-533879 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-533879 --alsologtostderr -v=3: (12.251997871s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-533879 -n old-k8s-version-533879
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-533879 -n old-k8s-version-533879: exit status 7 (96.739987ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-533879 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
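The "status error: exit status 7 (may be ok)" annotation is deliberate: minikube status encodes cluster state in its exit code, so a stopped profile prints "Stopped" and exits non-zero (7 in this run), which is exactly what the test expects right after Stop. Reproducing the check by hand with the command from the log:

    # a stopped host prints "Stopped" and exits 7; the test tolerates this
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-533879 -n old-k8s-version-533879
    echo "exit code: $?"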

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (439.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-533879 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1128 05:01:51.637578 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 05:01:56.077067 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-533879 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m18.884644093s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-533879 -n old-k8s-version-533879
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (439.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-792937 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd5ad0c3-77f3-41f2-9489-e429518ca0c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd5ad0c3-77f3-41f2-9489-e429518ca0c9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.035894529s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-792937 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-792937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-792937 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.296160939s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-792937 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-792937 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-792937 --alsologtostderr -v=3: (12.646790367s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-792937 -n no-preload-792937
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-792937 -n no-preload-792937: exit status 7 (118.761586ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-792937 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (354.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-792937 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 05:03:53.035997 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 05:04:54.690264 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 05:06:12.676438 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 05:06:51.638436 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-792937 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (5m53.670781867s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-792937 -n no-preload-792937
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (354.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xmdlz" [c1d3fa95-6f3e-49fd-96de-450f7302504f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xmdlz" [c1d3fa95-6f3e-49fd-96de-450f7302504f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.029086579s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xmdlz" [c1d3fa95-6f3e-49fd-96de-450f7302504f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010373117s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-792937 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-792937 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.37s)
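The image check ssh-es into the node, reads crictl's JSON image listing, and flags any repo tags outside the expected minikube/Kubernetes set; kindnetd and the busybox test image reported above are both known and tolerated. A hedged one-liner to reproduce the raw listing (jq is an assumption here, not something the harness uses):

    # dump the image tags crio knows about; jq assumed to be installed
    out/minikube-linux-arm64 ssh -p no-preload-792937 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'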

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-792937 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-792937 -n no-preload-792937
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-792937 -n no-preload-792937: exit status 2 (369.818148ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-792937 -n no-preload-792937
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-792937 -n no-preload-792937: exit status 2 (361.549185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-792937 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-792937 -n no-preload-792937
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-792937 -n no-preload-792937
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.50s)
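The Pause round-trip verifies both sides of the state transition: after pause, the APIServer field reports "Paused" and the Kubelet field reports "Stopped" (each status call exits 2, which the test tolerates), and unpause must bring both back. A condensed sketch of the same sequence with the commands from the log:

    # pause, confirm both components report a paused/stopped state,
    # then unpause and re-check (status exits 2 while paused)
    out/minikube-linux-arm64 pause -p no-preload-792937
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-792937   # "Paused"
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-792937    # "Stopped"
    out/minikube-linux-arm64 unpause -p no-preload-792937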

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-784116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 05:08:53.034808 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-784116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m23.157067231s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-qnxnj" [a71d5160-88bb-46c6-85f2-75810f09c384] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.060233739s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-qnxnj" [a71d5160-88bb-46c6-85f2-75810f09c384] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010086638s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-533879 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-533879 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-533879 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-533879 --alsologtostderr -v=1: (1.372537174s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-533879 -n old-k8s-version-533879
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-533879 -n old-k8s-version-533879: exit status 2 (455.945997ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-533879 -n old-k8s-version-533879
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-533879 -n old-k8s-version-533879: exit status 2 (451.759721ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-533879 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-533879 -n old-k8s-version-533879
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-533879 -n old-k8s-version-533879
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-078073 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-078073 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (1m24.466394617s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-784116 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c2ccecc8-0354-4ca9-9a63-d89241a91b83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c2ccecc8-0354-4ca9-9a63-d89241a91b83] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.030125426s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-784116 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-784116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-784116 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075985576s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-784116 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-784116 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-784116 --alsologtostderr -v=3: (12.168302624s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-784116 -n embed-certs-784116
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-784116 -n embed-certs-784116: exit status 7 (181.063916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-784116 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (623.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-784116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-784116 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (10m22.644114117s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-784116 -n embed-certs-784116
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (623.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-078073 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a04296c0-7bd7-466c-a049-64d941e62a34] Pending
helpers_test.go:344: "busybox" [a04296c0-7bd7-466c-a049-64d941e62a34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a04296c0-7bd7-466c-a049-64d941e62a34] Running
E1128 05:10:55.724029 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.025925154s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-078073 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.56s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-078073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-078073 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.584976127s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-078073 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.72s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-078073 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-078073 --alsologtostderr -v=3: (12.271944422s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073: exit status 7 (94.594588ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-078073 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (350.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-078073 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 05:11:12.676482 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 05:11:19.867908 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:19.873224 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:19.883886 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:19.904242 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:19.944716 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:20.025091 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:20.185474 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:20.506630 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:21.147080 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:22.427624 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:24.988069 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:30.108457 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:40.349036 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:11:51.638462 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 05:12:00.830082 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:12:05.892152 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:05.897463 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:05.907805 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:05.928149 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:05.968416 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:06.048832 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:06.209081 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:06.529637 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:07.170510 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:08.450913 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:11.011853 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:16.132089 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:26.372319 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:12:41.791139 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:12:46.853068 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:13:27.813623 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:13:53.035619 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
E1128 05:14:03.711653 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:14:49.733912 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
E1128 05:16:12.676971 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/addons-663058/client.crt: no such file or directory
E1128 05:16:19.867751 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:16:47.552347 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:16:51.637733 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-078073 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.4: (5m49.671421472s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (350.39s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vvjlw" [da72721c-7e84-44d8-bf4a-74c397cdeee1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1128 05:17:05.891767 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vvjlw" [da72721c-7e84-44d8-bf4a-74c397cdeee1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.059382416s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.06s)
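
A roughly equivalent manual check for the dashboard pod, sketched with plain kubectl (context, label, namespace and the 9m timeout taken from the test output above):

	# Block until the labeled pod reports Ready, or fail after the timeout.
	kubectl --context default-k8s-diff-port-078073 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m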

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vvjlw" [da72721c-7e84-44d8-bf4a-74c397cdeee1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010886018s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-078073 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-078073 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)
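
The image verification above parses "sudo crictl images -o json" from the node. A sketch of pulling the same list by hand; the jq filter is an illustration-only assumption and is not part of the test:

	# List every image repo tag known to CRI-O on the node.
	minikube ssh -p default-k8s-diff-port-078073 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'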

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-078073 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073: exit status 2 (363.352137ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073: exit status 2 (384.059129ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-078073 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-078073 -n default-k8s-diff-port-078073
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.73s)
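
The pause check toggles the cluster and reads component state back through status templates; exit status 2 while components are paused or stopped is expected and tolerated. A minimal sketch of the same sequence, again assuming a stock minikube binary:

	minikube pause -p default-k8s-diff-port-078073 --alsologtostderr -v=1
	minikube status --format={{.APIServer}} -p default-k8s-diff-port-078073   # prints Paused, exits 2
	minikube status --format={{.Kubelet}} -p default-k8s-diff-port-078073     # prints Stopped, exits 2
	minikube unpause -p default-k8s-diff-port-078073 --alsologtostderr -v=1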

TestStartStop/group/newest-cni/serial/FirstStart (42.56s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-509447 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 05:17:33.575029 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-509447 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (42.554916763s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.56s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-509447 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-509447 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.260354872s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/newest-cni/serial/Stop (1.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-509447 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-509447 --alsologtostderr -v=3: (1.329831855s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-509447 -n newest-cni-509447
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-509447 -n newest-cni-509447: exit status 7 (93.167517ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-509447 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (31.02s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-509447 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 05:18:36.078027 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-509447 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (30.601547155s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-509447 -n newest-cni-509447
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.02s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-509447 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/newest-cni/serial/Pause (3.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-509447 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-509447 -n newest-cni-509447
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-509447 -n newest-cni-509447: exit status 2 (559.129973ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-509447 -n newest-cni-509447
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-509447 -n newest-cni-509447: exit status 2 (382.309471ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-509447 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-509447 -n newest-cni-509447
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-509447 -n newest-cni-509447
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.47s)

TestNetworkPlugins/group/auto/Start (75.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1128 05:18:53.035074 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m15.80631201s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.81s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-502804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-502804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8vdls" [9b566e20-1c5b-4e31-a8e4-f026c76daf33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8vdls" [9b566e20-1c5b-4e31-a8e4-f026c76daf33] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.012362105s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.41s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-502804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
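
The three probes above (DNS, localhost, hairpin) all run through the same netcat deployment and can be replayed by hand; the commands below are copied from this log:

	kubectl --context auto-502804 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: the pod reaches itself via its own Service name
	kubectl --context auto-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"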

TestNetworkPlugins/group/kindnet/Start (82.41s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1128 05:20:47.801263 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:47.806586 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:47.816835 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:47.837109 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:47.877374 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:47.957714 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:48.118141 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:48.438590 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:49.079608 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:50.360370 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:20:52.921352 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m22.413273502s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.41s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j6d2w" [efe86c64-3707-4a0c-af7c-424f865b0a22] Running
E1128 05:20:58.041558 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027205554s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-j6d2w" [efe86c64-3707-4a0c-af7c-424f865b0a22] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013026196s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-784116 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.6s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-784116 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.60s)

TestStartStop/group/embed-certs/serial/Pause (5.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-784116 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-784116 --alsologtostderr -v=1: (1.421744636s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-784116 -n embed-certs-784116
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-784116 -n embed-certs-784116: exit status 2 (574.916159ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-784116 -n embed-certs-784116
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-784116 -n embed-certs-784116: exit status 2 (479.75344ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-784116 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-784116 --alsologtostderr -v=1: (1.135823932s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-784116 -n embed-certs-784116
E1128 05:21:08.282691 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-784116 -n embed-certs-784116
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.19s)

TestNetworkPlugins/group/calico/Start (72.22s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1128 05:21:19.867362 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/old-k8s-version-533879/client.crt: no such file or directory
E1128 05:21:28.763012 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:21:34.690900 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
E1128 05:21:51.638450 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m12.21932547s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-782f9" [25aef528-fac9-4e7d-9d13-ee07a8e12389] Running
E1128 05:22:05.892310 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/no-preload-792937/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.048535175s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-502804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.55s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-502804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mbjhs" [3dba2ded-c27f-4033-8b5b-46e4908dbbeb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 05:22:09.723595 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-mbjhs" [3dba2ded-c27f-4033-8b5b-46e4908dbbeb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.023504472s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.55s)

TestNetworkPlugins/group/kindnet/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-502804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.37s)

TestNetworkPlugins/group/kindnet/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)

TestNetworkPlugins/group/kindnet/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-szfcl" [afe09d5c-975f-4e15-8049-63b12bafd43b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.045487625s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-502804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

TestNetworkPlugins/group/calico/NetCatPod (13.58s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-502804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lbsqk" [d5d296bb-eb99-498f-91f9-74e55de036ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lbsqk" [d5d296bb-eb99-498f-91f9-74e55de036ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.014121238s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.58s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-502804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (76.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m16.928114765s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.93s)
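
Note that --cni accepts a path to a CNI manifest (testdata/kube-flannel.yaml here) as well as a built-in plugin name, which is what this run exercises. A sketch of the equivalent invocation, assuming a stock minikube binary and a local copy of the manifest:

	minikube start -p custom-flannel-502804 --memory=3072 --wait=true --wait-timeout=15m \
	  --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio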

TestNetworkPlugins/group/enable-default-cni/Start (82.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1128 05:23:31.644173 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
E1128 05:23:53.035180 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/functional-789811/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m22.619681264s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.62s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-502804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.55s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-502804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-78545" [a122552c-ff0c-427a-a9a6-eb1b02dbd729] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-78545" [a122552c-ff0c-427a-a9a6-eb1b02dbd729] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.017415609s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.60s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-502804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-502804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-502804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vnjmp" [21756b8f-e30d-4cd2-a105-ef1cb9d88253] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vnjmp" [21756b8f-e30d-4cd2-a105-ef1cb9d88253] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.011451252s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.42s)

TestNetworkPlugins/group/flannel/Start (74.13s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m14.127476036s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.13s)
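Note: each Start subtest provisions a fresh profile. The flags that matter for these network-plugin runs are --cni (selects the plugin), --container-runtime=crio, and --wait=true with --wait-timeout=15m, which block until core components report healthy. A hand-run sketch (the profile name flannel-demo is hypothetical):

    out/minikube-linux-arm64 start -p flannel-demo \
      --memory=3072 --cni=flannel \
      --driver=docker --container-runtime=crio \
      --wait=true --wait-timeout=15m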

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-502804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (85.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1128 05:25:27.165902 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/auto-502804/client.crt: no such file or directory
E1128 05:25:47.646732 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/auto-502804/client.crt: no such file or directory
E1128 05:25:47.801257 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/default-k8s-diff-port-078073/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-502804 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m25.517367299s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7r2c7" [1b56a075-9f43-4551-ae1c-6628ded6e9f1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.034526208s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)
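Note: ControllerPod only waits for the flannel DaemonSet pod to reach Running; the equivalent manual spot check is:

    kubectl --context flannel-502804 -n kube-flannel get pods -l app=flannel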

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-502804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-502804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f6bhq" [f7e1efbd-33fe-4ad5-a21f-c2c8701158a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f6bhq" [f7e1efbd-33fe-4ad5-a21f-c2c8701158a9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.015210945s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.38s)
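Note: NetCatPod force-replaces the netcat deployment and then polls until a matching pod is Ready; a rough manual equivalent of that wait is:

    kubectl --context flannel-502804 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context flannel-502804 wait --for=condition=ready \
      pod -l app=netcat --timeout=15m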

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-502804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-502804 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-502804 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kbhh2" [0cd004db-445c-495f-8c9b-e9b1f5133962] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kbhh2" [0cd004db-445c-495f-8c9b-e9b1f5133962] Running
E1128 05:26:51.637870 1261415 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/ingress-addon-legacy-120112/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.011873331s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-502804 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-502804 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    

Test skip (32/308)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.68s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-010923 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-010923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-010923
--- SKIP: TestDownloadOnlyKic (0.68s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-664476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-664476
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires a CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-502804 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-502804" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 04:52:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-541146
contexts:
- context:
    cluster: kubernetes-upgrade-541146
    user: kubernetes-upgrade-541146
  name: kubernetes-upgrade-541146
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-541146
  user:
    client-certificate: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/client.crt
    client-key: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/client.key
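Note: current-context above is empty and only the kubernetes-upgrade-541146 entries remain, which is why every probe in this section reports that the kubenet-502804 context was not found; had the profile existed, its context would be selected with:

    kubectl config use-context kubenet-502804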

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-502804

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-502804"

                                                
                                                
----------------------- debugLogs end: kubenet-502804 [took: 4.247614989s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-502804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-502804
--- SKIP: TestNetworkPlugins/group/kubenet (4.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-502804 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-502804

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-502804" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-502804" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-502804

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-502804

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-502804" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-502804" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-502804" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-502804" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-502804" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: kubelet daemon config:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> k8s: kubelet logs:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17671-1256059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 Nov 2023 04:52:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-541146
contexts:
- context:
    cluster: kubernetes-upgrade-541146
    user: kubernetes-upgrade-541146
  name: kubernetes-upgrade-541146
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-541146
  user:
    client-certificate: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/client.crt
    client-key: /home/jenkins/minikube-integration/17671-1256059/.minikube/profiles/kubernetes-upgrade-541146/client.key

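Note: the kubeconfig above also explains the repeated context errors in this dump: the only context present is kubernetes-upgrade-541146 (presumably left over from an earlier test in this run), and current-context is empty, so kubectl has nothing to resolve for cilium-502804. A minimal sketch for inspecting that state by hand, assuming the same kubeconfig is active:

    # list every context kubectl knows about; cilium-502804 will be absent
    kubectl config get-contexts
    # print the active context; exits non-zero here because current-context is ""
    kubectl config current-context
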
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-502804

>>> host: docker daemon status:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: docker daemon config:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: docker system info:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: cri-docker daemon status:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: cri-docker daemon config:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: cri-dockerd version:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: containerd daemon status:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: containerd daemon config:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: containerd config dump:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: crio daemon status:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: crio daemon config:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: /etc/crio:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

>>> host: crio config:
* Profile "cilium-502804" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-502804"

----------------------- debugLogs end: cilium-502804 [took: 4.949667678s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-502804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-502804
--- SKIP: TestNetworkPlugins/group/cilium (5.14s)
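
Note: every failure in this debugLogs block has a single root cause: TestNetworkPlugins/group/cilium was skipped, so the cilium-502804 profile and its cluster were never created, yet the log collector still probed them. A minimal sketch reproducing both failure shapes on a host without that profile (profile name and binary path are taken from this log; the exact probe commands are assumptions):

    # any minikube command scoped to the missing profile prints the "Profile ... not found" hint
    out/minikube-linux-arm64 -p cilium-502804 ssh "ip a s"
    # any kubectl command naming the missing context fails with: context "cilium-502804" does not exist
    kubectl --context cilium-502804 get pods -A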