Test Report: Docker_Linux_crio_arm64 17486

90bfaeb6484f3951039c439350045b001b754599:2023-11-01:31693

Test fail (8/308)

TestAddons/parallel/Ingress (167.17s)
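The log below shows two probes failing in sequence: an HTTP request to the ingress made through "minikube ssh" that dies with status 28 (ssh propagates the remote command's exit code, and 28 is curl's "operation timed out"), and an nslookup of hello-john.test against the node IP that also times out. As a minimal standalone sketch of the first probe — not the test's actual helper — one can shell out to the same binary and arguments shown at addons_test.go:261; the binary path and profile name are taken from this log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Sketch of the failing check: run curl inside the minikube node over ssh
// and report how long it took. Paths/names come from the log in this report.
func main() {
	start := time.Now()
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "addons-864560",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("took %v, err=%v\n%s", time.Since(start), err, out)
}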

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-864560 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-864560 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-864560 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2a3a9bb6-a49e-4692-9ec5-68a2735678ed] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2a3a9bb6-a49e-4692-9ec5-68a2735678ed] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.012308535s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-864560 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.784953176s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-864560 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.056072679s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-864560 addons disable ingress --alsologtostderr -v=1: (7.805967352s)
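For the DNS half of the failure above, an equivalent standalone probe can be written with Go's net.Resolver pinned to the node IP from the log. This is a sketch, not minikube code; it assumes the ingress-dns addon answers on UDP port 53 at 192.168.49.2, which is what the test's "nslookup hello-john.test 192.168.49.2" invocation implies:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Resolver that sends every query to the minikube node IP instead of
	// the system DNS, mirroring the test's nslookup against 192.168.49.2.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err) // this run's equivalent outcome
		return
	}
	fmt.Println("resolved:", addrs)
}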
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-864560
helpers_test.go:235: (dbg) docker inspect addons-864560:

-- stdout --
	[
	    {
	        "Id": "6fe6e254754b93bfcbc495fbadb5f3dc9871283f2483647dcf4158eb9db397f5",
	        "Created": "2023-11-01T00:32:55.762079038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1203865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-01T00:32:56.0958996Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bd2c3f7c992aecdf624ceae92825f3a10bf56bd552768efdb49aafbacd808193",
	        "ResolvConfPath": "/var/lib/docker/containers/6fe6e254754b93bfcbc495fbadb5f3dc9871283f2483647dcf4158eb9db397f5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fe6e254754b93bfcbc495fbadb5f3dc9871283f2483647dcf4158eb9db397f5/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fe6e254754b93bfcbc495fbadb5f3dc9871283f2483647dcf4158eb9db397f5/hosts",
	        "LogPath": "/var/lib/docker/containers/6fe6e254754b93bfcbc495fbadb5f3dc9871283f2483647dcf4158eb9db397f5/6fe6e254754b93bfcbc495fbadb5f3dc9871283f2483647dcf4158eb9db397f5-json.log",
	        "Name": "/addons-864560",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-864560:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-864560",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/283bb750b14728c0bf047dbbb31c5b503ef9d6840d4451189e46940c2e20bff0-init/diff:/var/lib/docker/overlay2/d052914c945f7ab680be56190d2f2374e48b87c8da40d55e2692538d0bc19343/diff",
	                "MergedDir": "/var/lib/docker/overlay2/283bb750b14728c0bf047dbbb31c5b503ef9d6840d4451189e46940c2e20bff0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/283bb750b14728c0bf047dbbb31c5b503ef9d6840d4451189e46940c2e20bff0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/283bb750b14728c0bf047dbbb31c5b503ef9d6840d4451189e46940c2e20bff0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-864560",
	                "Source": "/var/lib/docker/volumes/addons-864560/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-864560",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-864560",
	                "name.minikube.sigs.k8s.io": "addons-864560",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80a674d6028c0088a83c29f9d5f3e64633e4baa4f52a439bea1cd818e379a95f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34292"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34291"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34288"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34290"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34289"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/80a674d6028c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-864560": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6fe6e254754b",
	                        "addons-864560"
	                    ],
	                    "NetworkID": "cd9d32ef43e2ec12ace3d0616ab79b45e7d29b6dfae2c55cbd3173b0246b6b7c",
	                    "EndpointID": "1dc88454c6772cdd4713184393ecc16891fe69cf17f3ff3931cf0187f626f6c5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
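The NetworkSettings.Ports block in the inspect output above is where the randomized host ports live (22/tcp maps to 127.0.0.1:34292 in this run), and the provisioning log further down reads that mapping with a Go template. A standalone sketch of the same lookup, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template minikube's cli_runner uses later in this log to find
	// the published SSH port for the addons-864560 container.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-864560").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 34292 here
}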
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-864560 -n addons-864560
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-864560 logs -n 25: (1.596594975s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC | 01 Nov 23 00:32 UTC |
	| delete  | -p download-only-851884                                                                     | download-only-851884   | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC | 01 Nov 23 00:32 UTC |
	| delete  | -p download-only-851884                                                                     | download-only-851884   | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC | 01 Nov 23 00:32 UTC |
	| start   | --download-only -p                                                                          | download-docker-263246 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC |                     |
	|         | download-docker-263246                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p download-docker-263246                                                                   | download-docker-263246 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC | 01 Nov 23 00:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-601750   | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC |                     |
	|         | binary-mirror-601750                                                                        |                        |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |                |                     |                     |
	|         | http://127.0.0.1:46695                                                                      |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	| delete  | -p binary-mirror-601750                                                                     | binary-mirror-601750   | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC | 01 Nov 23 00:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC |                     |
	|         | addons-864560                                                                               |                        |         |                |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC |                     |
	|         | addons-864560                                                                               |                        |         |                |                     |                     |
	| start   | -p addons-864560 --wait=true                                                                | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC | 01 Nov 23 00:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |                |                     |                     |
	|         | --addons=registry                                                                           |                        |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |                |                     |                     |
	|         | --driver=docker                                                                             |                        |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |                |                     |                     |
	| ip      | addons-864560 ip                                                                            | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC | 01 Nov 23 00:35 UTC |
	| addons  | addons-864560 addons disable                                                                | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC | 01 Nov 23 00:35 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-864560 addons                                                                        | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC | 01 Nov 23 00:35 UTC |
	|         | disable metrics-server                                                                      |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC | 01 Nov 23 00:35 UTC |
	|         | addons-864560                                                                               |                        |         |                |                     |                     |
	| ssh     | addons-864560 ssh curl -s                                                                   | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |                |                     |                     |
	| addons  | addons-864560 addons                                                                        | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:36 UTC | 01 Nov 23 00:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | addons-864560 addons                                                                        | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:36 UTC | 01 Nov 23 00:36 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| ssh     | addons-864560 ssh cat                                                                       | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:36 UTC | 01 Nov 23 00:36 UTC |
	|         | /opt/local-path-provisioner/pvc-88eb9be0-9144-4090-b7b4-bfa3bc5fed6f_default_test-pvc/file1 |                        |         |                |                     |                     |
	| addons  | addons-864560 addons disable                                                                | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:36 UTC | 01 Nov 23 00:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:36 UTC | 01 Nov 23 00:36 UTC |
	|         | -p addons-864560                                                                            |                        |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:36 UTC | 01 Nov 23 00:36 UTC |
	|         | addons-864560                                                                               |                        |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:36 UTC | 01 Nov 23 00:36 UTC |
	|         | -p addons-864560                                                                            |                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |                |                     |                     |
	| ip      | addons-864560 ip                                                                            | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:37 UTC | 01 Nov 23 00:37 UTC |
	| addons  | addons-864560 addons disable                                                                | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:38 UTC | 01 Nov 23 00:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |                |                     |                     |
	|         | -v=1                                                                                        |                        |         |                |                     |                     |
	| addons  | addons-864560 addons disable                                                                | addons-864560          | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:38 UTC | 01 Nov 23 00:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:32:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:32:31.894276 1203401 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:32:31.894406 1203401 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:32:31.894416 1203401 out.go:309] Setting ErrFile to fd 2...
	I1101 00:32:31.894422 1203401 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:32:31.894682 1203401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 00:32:31.895106 1203401 out.go:303] Setting JSON to false
	I1101 00:32:31.896050 1203401 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29699,"bootTime":1698769053,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 00:32:31.896122 1203401 start.go:138] virtualization:  
	I1101 00:32:31.898760 1203401 out.go:177] * [addons-864560] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 00:32:31.901818 1203401 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:32:31.901951 1203401 notify.go:220] Checking for updates...
	I1101 00:32:31.905718 1203401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:32:31.907792 1203401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:32:31.909974 1203401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 00:32:31.912155 1203401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 00:32:31.914280 1203401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:32:31.916483 1203401 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:32:31.944351 1203401 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 00:32:31.944457 1203401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:32:32.025572 1203401 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-01 00:32:32.015866003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:32:32.025764 1203401 docker.go:295] overlay module found
	I1101 00:32:32.028054 1203401 out.go:177] * Using the docker driver based on user configuration
	I1101 00:32:32.030045 1203401 start.go:298] selected driver: docker
	I1101 00:32:32.030062 1203401 start.go:902] validating driver "docker" against <nil>
	I1101 00:32:32.030077 1203401 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:32:32.030723 1203401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:32:32.103603 1203401 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-01 00:32:32.094400348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:32:32.103761 1203401 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1101 00:32:32.103996 1203401 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:32:32.105920 1203401 out.go:177] * Using Docker driver with root privileges
	I1101 00:32:32.108106 1203401 cni.go:84] Creating CNI manager for ""
	I1101 00:32:32.108127 1203401 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:32:32.108138 1203401 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 00:32:32.108153 1203401 start_flags.go:323] config:
	{Name:addons-864560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-864560 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:32:32.110363 1203401 out.go:177] * Starting control plane node addons-864560 in cluster addons-864560
	I1101 00:32:32.111936 1203401 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 00:32:32.113670 1203401 out.go:177] * Pulling base image ...
	I1101 00:32:32.115441 1203401 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:32:32.115497 1203401 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1101 00:32:32.115509 1203401 cache.go:56] Caching tarball of preloaded images
	I1101 00:32:32.115534 1203401 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon
	I1101 00:32:32.115590 1203401 preload.go:174] Found /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 00:32:32.115600 1203401 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 00:32:32.115970 1203401 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/config.json ...
	I1101 00:32:32.115996 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/config.json: {Name:mk35e4b80246cadf84dd3261b14579d2075a8a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:32:32.132371 1203401 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 to local cache
	I1101 00:32:32.132516 1203401 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local cache directory
	I1101 00:32:32.132538 1203401 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local cache directory, skipping pull
	I1101 00:32:32.132544 1203401 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 exists in cache, skipping pull
	I1101 00:32:32.132556 1203401 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 as a tarball
	I1101 00:32:32.132565 1203401 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 from local cache
	I1101 00:32:47.569542 1203401 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 from cached tarball
	I1101 00:32:47.569601 1203401 cache.go:194] Successfully downloaded all kic artifacts
	I1101 00:32:47.569677 1203401 start.go:365] acquiring machines lock for addons-864560: {Name:mkfae7bafa9f5302c3de2c5661dd46740e8d7913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:32:47.569794 1203401 start.go:369] acquired machines lock for "addons-864560" in 93.349µs
	I1101 00:32:47.569822 1203401 start.go:93] Provisioning new machine with config: &{Name:addons-864560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-864560 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:32:47.569916 1203401 start.go:125] createHost starting for "" (driver="docker")
	I1101 00:32:47.572499 1203401 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1101 00:32:47.572743 1203401 start.go:159] libmachine.API.Create for "addons-864560" (driver="docker")
	I1101 00:32:47.572776 1203401 client.go:168] LocalClient.Create starting
	I1101 00:32:47.572876 1203401 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem
	I1101 00:32:48.374121 1203401 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem
	I1101 00:32:49.085589 1203401 cli_runner.go:164] Run: docker network inspect addons-864560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 00:32:49.102574 1203401 cli_runner.go:211] docker network inspect addons-864560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 00:32:49.102665 1203401 network_create.go:281] running [docker network inspect addons-864560] to gather additional debugging logs...
	I1101 00:32:49.102686 1203401 cli_runner.go:164] Run: docker network inspect addons-864560
	W1101 00:32:49.120404 1203401 cli_runner.go:211] docker network inspect addons-864560 returned with exit code 1
	I1101 00:32:49.120437 1203401 network_create.go:284] error running [docker network inspect addons-864560]: docker network inspect addons-864560: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-864560 not found
	I1101 00:32:49.120457 1203401 network_create.go:286] output of [docker network inspect addons-864560]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-864560 not found
	
	** /stderr **
	I1101 00:32:49.120558 1203401 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 00:32:49.137521 1203401 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002ea6130}
	I1101 00:32:49.137557 1203401 network_create.go:124] attempt to create docker network addons-864560 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 00:32:49.137620 1203401 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-864560 addons-864560
	I1101 00:32:49.206867 1203401 network_create.go:108] docker network addons-864560 192.168.49.0/24 created
	I1101 00:32:49.206895 1203401 kic.go:121] calculated static IP "192.168.49.2" for the "addons-864560" container
	I1101 00:32:49.206979 1203401 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 00:32:49.224170 1203401 cli_runner.go:164] Run: docker volume create addons-864560 --label name.minikube.sigs.k8s.io=addons-864560 --label created_by.minikube.sigs.k8s.io=true
	I1101 00:32:49.242758 1203401 oci.go:103] Successfully created a docker volume addons-864560
	I1101 00:32:49.242862 1203401 cli_runner.go:164] Run: docker run --rm --name addons-864560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864560 --entrypoint /usr/bin/test -v addons-864560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -d /var/lib
	I1101 00:32:51.355154 1203401 cli_runner.go:217] Completed: docker run --rm --name addons-864560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864560 --entrypoint /usr/bin/test -v addons-864560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -d /var/lib: (2.112254096s)
	I1101 00:32:51.355188 1203401 oci.go:107] Successfully prepared a docker volume addons-864560
	I1101 00:32:51.355226 1203401 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:32:51.355248 1203401 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 00:32:51.355327 1203401 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-864560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 00:32:55.681909 1203401 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-864560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir: (4.32653849s)
	I1101 00:32:55.681942 1203401 kic.go:203] duration metric: took 4.326690 seconds to extract preloaded images to volume
	W1101 00:32:55.682099 1203401 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 00:32:55.682216 1203401 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 00:32:55.746539 1203401 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-864560 --name addons-864560 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864560 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-864560 --network addons-864560 --ip 192.168.49.2 --volume addons-864560:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458
	I1101 00:32:56.104275 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Running}}
	I1101 00:32:56.137825 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:32:56.168932 1203401 cli_runner.go:164] Run: docker exec addons-864560 stat /var/lib/dpkg/alternatives/iptables
	I1101 00:32:56.237236 1203401 oci.go:144] the created container "addons-864560" has a running status.
	I1101 00:32:56.237261 1203401 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa...
	I1101 00:32:57.856660 1203401 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 00:32:57.876553 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:32:57.893281 1203401 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 00:32:57.893303 1203401 kic_runner.go:114] Args: [docker exec --privileged addons-864560 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 00:32:57.975824 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:32:57.993079 1203401 machine.go:88] provisioning docker machine ...
	I1101 00:32:57.993109 1203401 ubuntu.go:169] provisioning hostname "addons-864560"
	I1101 00:32:57.993173 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:32:58.015858 1203401 main.go:141] libmachine: Using SSH client type: native
	I1101 00:32:58.016295 1203401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I1101 00:32:58.016314 1203401 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-864560 && echo "addons-864560" | sudo tee /etc/hostname
	I1101 00:32:58.168002 1203401 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864560
	
	I1101 00:32:58.168128 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:32:58.186221 1203401 main.go:141] libmachine: Using SSH client type: native
	I1101 00:32:58.186684 1203401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I1101 00:32:58.186707 1203401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-864560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-864560/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-864560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:32:58.326118 1203401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
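
Both provisioning commands above run over an SSH session to the container's published port 22 (forwarded to 127.0.0.1:34292 here). A sketch of that pattern with golang.org/x/crypto/ssh, assuming key-based auth with the id_rsa generated earlier; the hostname and port are taken from the log, everything else is illustrative:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(os.Getenv("HOME") +
    		"/.minikube/machines/addons-864560/id_rsa") // path pattern from the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:34292", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(
    		`sudo hostname addons-864560 && echo "addons-864560" | sudo tee /etc/hostname`)
    	fmt.Printf("%s err=%v\n", out, err)
    }
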
	I1101 00:32:58.326146 1203401 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 00:32:58.326170 1203401 ubuntu.go:177] setting up certificates
	I1101 00:32:58.326178 1203401 provision.go:83] configureAuth start
	I1101 00:32:58.326242 1203401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864560
	I1101 00:32:58.345005 1203401 provision.go:138] copyHostCerts
	I1101 00:32:58.345076 1203401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 00:32:58.345201 1203401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 00:32:58.345258 1203401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 00:32:58.345300 1203401 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.addons-864560 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-864560]
	I1101 00:32:58.549297 1203401 provision.go:172] copyRemoteCerts
	I1101 00:32:58.549364 1203401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:32:58.549408 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:32:58.572807 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:32:58.675448 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:32:58.704393 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1101 00:32:58.732828 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 00:32:58.760654 1203401 provision.go:86] duration metric: configureAuth took 434.456515ms
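
configureAuth above generates a server certificate signed by the local minikube CA, with the IP and DNS SANs listed in the `san=[...]` line. A standard-library sketch of that step (error handling elided for brevity; the self-signed CA here stands in for the ca.pem/ca-key.pem loaded from disk):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA (the real flow loads it from ~/.minikube/certs).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs from the log's san=[...] list.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-864560"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "addons-864560"},
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
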
	I1101 00:32:58.760683 1203401 ubuntu.go:193] setting minikube options for container-runtime
	I1101 00:32:58.760874 1203401 config.go:182] Loaded profile config "addons-864560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:32:58.760978 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:32:58.778792 1203401 main.go:141] libmachine: Using SSH client type: native
	I1101 00:32:58.779224 1203401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I1101 00:32:58.779246 1203401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:32:59.032178 1203401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:32:59.032200 1203401 machine.go:91] provisioned docker machine in 1.039101018s
	I1101 00:32:59.032210 1203401 client.go:171] LocalClient.Create took 11.459424954s
	I1101 00:32:59.032222 1203401 start.go:167] duration metric: libmachine.API.Create for "addons-864560" took 11.459481241s
	I1101 00:32:59.032230 1203401 start.go:300] post-start starting for "addons-864560" (driver="docker")
	I1101 00:32:59.032239 1203401 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:32:59.032312 1203401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:32:59.032365 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:32:59.050632 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:32:59.151680 1203401 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:32:59.155707 1203401 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 00:32:59.155749 1203401 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 00:32:59.155764 1203401 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 00:32:59.155772 1203401 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 00:32:59.155782 1203401 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 00:32:59.155847 1203401 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 00:32:59.155876 1203401 start.go:303] post-start completed in 123.640914ms
	I1101 00:32:59.156183 1203401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864560
	I1101 00:32:59.173640 1203401 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/config.json ...
	I1101 00:32:59.173923 1203401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 00:32:59.173974 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:32:59.190755 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:32:59.286969 1203401 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 00:32:59.292342 1203401 start.go:128] duration metric: createHost completed in 11.722412308s
	I1101 00:32:59.292365 1203401 start.go:83] releasing machines lock for "addons-864560", held for 11.722559721s
	I1101 00:32:59.292441 1203401 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864560
	I1101 00:32:59.309465 1203401 ssh_runner.go:195] Run: cat /version.json
	I1101 00:32:59.309515 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:32:59.309535 1203401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:32:59.309593 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:32:59.329399 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:32:59.338429 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:32:59.564219 1203401 ssh_runner.go:195] Run: systemctl --version
	I1101 00:32:59.569488 1203401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:32:59.718982 1203401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:32:59.724342 1203401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:32:59.748099 1203401 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 00:32:59.748224 1203401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:32:59.787580 1203401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
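
The find/mv passes above neutralize any loopback, bridge, or podman CNI configs that would conflict with kindnet by renaming them with a .mk_disabled suffix. A small Go sketch of the same idea (paths from the log; illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, _ := filepath.Glob(pat)
    		for _, f := range matches {
    			if strings.HasSuffix(f, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(f, f+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, err)
    			}
    		}
    	}
    }

Renaming rather than deleting keeps the originals recoverable, matching the reversible `mv {} {}.mk_disabled` the log shows.
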
	I1101 00:32:59.787599 1203401 start.go:472] detecting cgroup driver to use...
	I1101 00:32:59.787630 1203401 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1101 00:32:59.787688 1203401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:32:59.805403 1203401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:32:59.819107 1203401 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:32:59.819172 1203401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:32:59.835356 1203401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:32:59.852616 1203401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:32:59.948833 1203401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:33:00.051926 1203401 docker.go:220] disabling docker service ...
	I1101 00:33:00.052060 1203401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:33:00.080135 1203401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:33:00.098731 1203401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:33:00.196061 1203401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:33:00.300401 1203401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:33:00.313934 1203401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:33:00.333983 1203401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 00:33:00.334079 1203401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:33:00.346828 1203401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:33:00.346904 1203401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:33:00.359720 1203401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:33:00.372057 1203401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
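
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add a conmon_cgroup setting compatible with it. The equivalent rewrite in Go, as a sketch over an in-memory string (the real edit targets the file shown in the log):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	conf += "conmon_cgroup = \"pod\"\n" // the log appends this after cgroup_manager
    	fmt.Print(conf)
    }
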
	I1101 00:33:00.384315 1203401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:33:00.395612 1203401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:33:00.406763 1203401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:33:00.417164 1203401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:33:00.512486 1203401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 00:33:00.623857 1203401 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:33:00.623988 1203401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:33:00.628516 1203401 start.go:540] Will wait 60s for crictl version
	I1101 00:33:00.628588 1203401 ssh_runner.go:195] Run: which crictl
	I1101 00:33:00.633109 1203401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:33:00.680820 1203401 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 00:33:00.680992 1203401 ssh_runner.go:195] Run: crio --version
	I1101 00:33:00.726264 1203401 ssh_runner.go:195] Run: crio --version
	I1101 00:33:00.775286 1203401 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1101 00:33:00.777205 1203401 cli_runner.go:164] Run: docker network inspect addons-864560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 00:33:00.794769 1203401 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 00:33:00.799184 1203401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
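
The one-liner above makes the host.minikube.internal mapping idempotent: strip any stale line, append the fresh entry, write to a temp file, then copy it into place with sudo. A Go sketch of the same rewrite (the temp path stands in for the log's /tmp/h.$$):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.49.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/tmp/hosts.new",
    		[]byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    	// A real run would now `sudo cp /tmp/hosts.new /etc/hosts`.
    }
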
	I1101 00:33:00.811642 1203401 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:33:00.811710 1203401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:33:00.882345 1203401 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:33:00.882371 1203401 crio.go:415] Images already preloaded, skipping extraction
	I1101 00:33:00.882426 1203401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:33:00.924594 1203401 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 00:33:00.924616 1203401 cache_images.go:84] Images are preloaded, skipping loading
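
The preload check above lists images through `crictl images --output json` and compares the repo tags against the expected v1.28.3 set. A sketch of that comparison; the JSON field names follow crictl's output schema as I understand it, so treat them as an assumption:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Abbreviated view of `crictl images -o json` output (assumed schema).
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		fmt.Println("bad JSON:", err)
    		return
    	}
    	have := map[string]bool{}
    	for _, img := range imgs.Images {
    		for _, t := range img.RepoTags {
    			have[t] = true
    		}
    	}
    	fmt.Println("apiserver preloaded:", have["registry.k8s.io/kube-apiserver:v1.28.3"])
    }
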
	I1101 00:33:00.924697 1203401 ssh_runner.go:195] Run: crio config
	I1101 00:33:00.984060 1203401 cni.go:84] Creating CNI manager for ""
	I1101 00:33:00.984084 1203401 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:33:00.984129 1203401 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:33:00.984156 1203401 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-864560 NodeName:addons-864560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 00:33:00.984324 1203401 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-864560"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
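
The kubeadm.yaml above is rendered from templates with values taken from the kubeadm options struct logged earlier. A minimal stand-in using text/template and two of the logged fields (illustrative; not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	data := struct {
    		AdvertiseAddress string
    		APIServerPort    int
    		CRISocket        string
    		NodeName         string
    	}{"192.168.49.2", 8443, "unix:///var/run/crio/crio.sock", "addons-864560"}
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data)
    }
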
	
	I1101 00:33:00.984403 1203401 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-864560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-864560 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 00:33:00.984473 1203401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 00:33:00.994473 1203401 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:33:00.994540 1203401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:33:01.005139 1203401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1101 00:33:01.025555 1203401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 00:33:01.045945 1203401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1101 00:33:01.066123 1203401 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 00:33:01.070590 1203401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:33:01.083890 1203401 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560 for IP: 192.168.49.2
	I1101 00:33:01.083920 1203401 certs.go:190] acquiring lock for shared ca certs: {Name:mk19a54d78f5cf4996fdfc5da5ee5226ef1f844f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:01.084047 1203401 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key
	I1101 00:33:01.325289 1203401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt ...
	I1101 00:33:01.325321 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt: {Name:mk6b353c50536eb3428982f48e521fca4db41a5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:01.325516 1203401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key ...
	I1101 00:33:01.325529 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key: {Name:mkd3c147de6dde46417cedd2e992328c179d3357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:01.326125 1203401 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key
	I1101 00:33:01.731998 1203401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt ...
	I1101 00:33:01.732031 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt: {Name:mk045d25daaf83535ac0166320811159e434e61d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:01.732220 1203401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key ...
	I1101 00:33:01.732234 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key: {Name:mk7c02372c111d5c8a5c588ac362bcca9e00e699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:01.732358 1203401 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.key
	I1101 00:33:01.732376 1203401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt with IP's: []
	I1101 00:33:01.979679 1203401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt ...
	I1101 00:33:01.979709 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: {Name:mkd8f65a55dd3d53bcb5837068e1739d40ac6143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:01.980815 1203401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.key ...
	I1101 00:33:01.980831 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.key: {Name:mkf8d4cc2e494eec5dd143a9588d7dc1ca3ac39c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:01.980918 1203401 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.key.dd3b5fb2
	I1101 00:33:01.980938 1203401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 00:33:02.472550 1203401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.crt.dd3b5fb2 ...
	I1101 00:33:02.472582 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.crt.dd3b5fb2: {Name:mk162ea96a9724a2acf57deaff8d6fc047a510ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:02.472775 1203401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.key.dd3b5fb2 ...
	I1101 00:33:02.472788 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.key.dd3b5fb2: {Name:mke3cb687f58493a52439c0360cab5297e6867f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:02.472870 1203401 certs.go:337] copying /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.crt
	I1101 00:33:02.472944 1203401 certs.go:341] copying /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.key
	I1101 00:33:02.473013 1203401 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/proxy-client.key
	I1101 00:33:02.473035 1203401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/proxy-client.crt with IP's: []
	I1101 00:33:02.741508 1203401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/proxy-client.crt ...
	I1101 00:33:02.741540 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/proxy-client.crt: {Name:mkee5b15a61e5cd1917d937e56690044eb369294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:02.742320 1203401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/proxy-client.key ...
	I1101 00:33:02.742337 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/proxy-client.key: {Name:mkf68c408a6c2f487b3bb6a450704620571eadf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:02.742577 1203401 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:33:02.742619 1203401 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:33:02.742649 1203401 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:33:02.742677 1203401 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem (1675 bytes)
	I1101 00:33:02.743369 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:33:02.773479 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 00:33:02.801684 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:33:02.831454 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:33:02.860761 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:33:02.889878 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:33:02.918242 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:33:02.946885 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:33:02.975468 1203401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:33:03.004392 1203401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:33:03.025914 1203401 ssh_runner.go:195] Run: openssl version
	I1101 00:33:03.033137 1203401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:33:03.044766 1203401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:33:03.049476 1203401 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  1 00:33 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:33:03.049551 1203401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:33:03.058528 1203401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
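
The symlink name b5213941.0 in the step above is not arbitrary: it is the subject hash that `openssl x509 -hash` prints for minikubeCA.pem, which is how OpenSSL locates CA certs in /etc/ssl/certs. A sketch of the check-and-link logic (requires root to write into /etc/ssl/certs):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		fmt.Println("linking", link)
    		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }
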
	I1101 00:33:03.070631 1203401 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:33:03.075354 1203401 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:33:03.075428 1203401 kubeadm.go:404] StartCluster: {Name:addons-864560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-864560 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:33:03.075523 1203401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:33:03.075628 1203401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:33:03.129489 1203401 cri.go:89] found id: ""
	I1101 00:33:03.129583 1203401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:33:03.141847 1203401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:33:03.153060 1203401 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1101 00:33:03.153129 1203401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:33:03.163920 1203401 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
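
The exit-status-2 result above is how the tooling distinguishes a first start from a restart: if none of the kubeconfig files kubeadm would have written exist yet, stale-config cleanup is skipped. A sketch of that probe (paths from the log):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "ls", "-la",
    		"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf")
    	if err := cmd.Run(); err != nil {
    		var ee *exec.ExitError
    		if errors.As(err, &ee) {
    			fmt.Printf("exit %d: likely a first start, skipping cleanup\n", ee.ExitCode())
    			return
    		}
    		fmt.Println("could not run ls:", err)
    		return
    	}
    	fmt.Println("existing config found; a restart would clean it up")
    }
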
	I1101 00:33:03.163965 1203401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 00:33:03.218299 1203401 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 00:33:03.218538 1203401 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 00:33:03.267815 1203401 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1101 00:33:03.267919 1203401 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1101 00:33:03.267976 1203401 kubeadm.go:322] OS: Linux
	I1101 00:33:03.268037 1203401 kubeadm.go:322] CGROUPS_CPU: enabled
	I1101 00:33:03.268103 1203401 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1101 00:33:03.268164 1203401 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1101 00:33:03.268228 1203401 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1101 00:33:03.268291 1203401 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1101 00:33:03.268355 1203401 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1101 00:33:03.268415 1203401 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1101 00:33:03.268474 1203401 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1101 00:33:03.268534 1203401 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1101 00:33:03.345411 1203401 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 00:33:03.345577 1203401 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 00:33:03.345707 1203401 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 00:33:03.596529 1203401 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 00:33:03.600572 1203401 out.go:204]   - Generating certificates and keys ...
	I1101 00:33:03.600680 1203401 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 00:33:03.600920 1203401 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 00:33:03.942021 1203401 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 00:33:04.648127 1203401 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1101 00:33:04.923089 1203401 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1101 00:33:05.130853 1203401 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1101 00:33:05.457418 1203401 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1101 00:33:05.457797 1203401 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-864560 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 00:33:05.666844 1203401 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1101 00:33:05.667102 1203401 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-864560 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 00:33:05.878133 1203401 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 00:33:06.163144 1203401 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 00:33:06.307834 1203401 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1101 00:33:06.308143 1203401 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 00:33:06.471612 1203401 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 00:33:06.967346 1203401 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 00:33:07.330882 1203401 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 00:33:07.471153 1203401 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 00:33:07.471826 1203401 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 00:33:07.475581 1203401 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 00:33:07.478684 1203401 out.go:204]   - Booting up control plane ...
	I1101 00:33:07.478795 1203401 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:33:07.478916 1203401 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:33:07.480102 1203401 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:33:07.491956 1203401 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:33:07.492057 1203401 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:33:07.492105 1203401 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 00:33:07.586501 1203401 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 00:33:14.089300 1203401 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502889 seconds
	I1101 00:33:14.089419 1203401 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 00:33:14.102311 1203401 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 00:33:14.630879 1203401 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 00:33:14.631063 1203401 kubeadm.go:322] [mark-control-plane] Marking the node addons-864560 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 00:33:15.144742 1203401 kubeadm.go:322] [bootstrap-token] Using token: vdq9cd.jkw87xtzz3vbtdrf
	I1101 00:33:15.146569 1203401 out.go:204]   - Configuring RBAC rules ...
	I1101 00:33:15.146688 1203401 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 00:33:15.157588 1203401 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 00:33:15.167397 1203401 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 00:33:15.171319 1203401 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 00:33:15.178301 1203401 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 00:33:15.181992 1203401 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 00:33:15.195281 1203401 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 00:33:15.440234 1203401 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 00:33:15.565043 1203401 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 00:33:15.565060 1203401 kubeadm.go:322] 
	I1101 00:33:15.565116 1203401 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 00:33:15.565121 1203401 kubeadm.go:322] 
	I1101 00:33:15.565193 1203401 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 00:33:15.565198 1203401 kubeadm.go:322] 
	I1101 00:33:15.565222 1203401 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 00:33:15.565285 1203401 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 00:33:15.565334 1203401 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 00:33:15.565338 1203401 kubeadm.go:322] 
	I1101 00:33:15.565389 1203401 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 00:33:15.565399 1203401 kubeadm.go:322] 
	I1101 00:33:15.565443 1203401 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 00:33:15.565448 1203401 kubeadm.go:322] 
	I1101 00:33:15.565496 1203401 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 00:33:15.565573 1203401 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 00:33:15.565638 1203401 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 00:33:15.565642 1203401 kubeadm.go:322] 
	I1101 00:33:15.565720 1203401 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 00:33:15.565792 1203401 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 00:33:15.565796 1203401 kubeadm.go:322] 
	I1101 00:33:15.565874 1203401 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vdq9cd.jkw87xtzz3vbtdrf \
	I1101 00:33:15.565971 1203401 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 \
	I1101 00:33:15.565990 1203401 kubeadm.go:322] 	--control-plane 
	I1101 00:33:15.565996 1203401 kubeadm.go:322] 
	I1101 00:33:15.566074 1203401 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 00:33:15.566079 1203401 kubeadm.go:322] 
	I1101 00:33:15.566155 1203401 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vdq9cd.jkw87xtzz3vbtdrf \
	I1101 00:33:15.566250 1203401 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 
	I1101 00:33:15.567942 1203401 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1101 00:33:15.568051 1203401 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 00:33:15.568066 1203401 cni.go:84] Creating CNI manager for ""
	I1101 00:33:15.568074 1203401 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:33:15.570202 1203401 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 00:33:15.571908 1203401 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:33:15.582659 1203401 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 00:33:15.582679 1203401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:33:15.634035 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:33:16.493505 1203401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:33:16.493640 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:16.493713 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=addons-864560 minikube.k8s.io/updated_at=2023_11_01T00_33_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:16.522403 1203401 ops.go:34] apiserver oom_adj: -16
	I1101 00:33:16.701365 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:16.796307 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:17.386292 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:17.885605 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:18.386361 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:18.885983 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:19.386347 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:19.885584 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:20.385582 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:20.886374 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:21.385580 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:21.886056 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:22.385964 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:22.886308 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:23.386184 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:23.885722 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:24.386359 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:24.885757 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:25.386173 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:25.885752 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:26.385991 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:26.886173 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:27.386203 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:27.885522 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:28.385567 1203401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:33:28.589978 1203401 kubeadm.go:1081] duration metric: took 12.096380126s to wait for elevateKubeSystemPrivileges.
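
The burst of identical `kubectl get sa default` runs above is a fixed-interval poll: the default ServiceAccount only appears once the controller manager has reconciled the namespace, so the loop retries roughly every 500ms until the command succeeds (about 12s here). A sketch of the pattern:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // timeout is illustrative
    	for time.Now().Before(deadline) {
    		err := exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
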
	I1101 00:33:28.590002 1203401 kubeadm.go:406] StartCluster complete in 25.514579056s
	I1101 00:33:28.590018 1203401 settings.go:142] acquiring lock: {Name:mke36bce3f316e572c27d9ade5690ad307116f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:28.590141 1203401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:33:28.590510 1203401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/kubeconfig: {Name:mk54047efde1577abb33547e94416477b8fd3071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:33:28.592586 1203401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:33:28.592812 1203401 config.go:182] Loaded profile config "addons-864560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:33:28.592839 1203401 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1101 00:33:28.592903 1203401 addons.go:69] Setting volumesnapshots=true in profile "addons-864560"
	I1101 00:33:28.592915 1203401 addons.go:231] Setting addon volumesnapshots=true in "addons-864560"
	I1101 00:33:28.592970 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.593512 1203401 addons.go:69] Setting ingress=true in profile "addons-864560"
	I1101 00:33:28.593528 1203401 addons.go:231] Setting addon ingress=true in "addons-864560"
	I1101 00:33:28.593565 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.594032 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.594600 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.594973 1203401 addons.go:69] Setting cloud-spanner=true in profile "addons-864560"
	I1101 00:33:28.595001 1203401 addons.go:231] Setting addon cloud-spanner=true in "addons-864560"
	I1101 00:33:28.595037 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.595435 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.595546 1203401 addons.go:69] Setting ingress-dns=true in profile "addons-864560"
	I1101 00:33:28.595561 1203401 addons.go:231] Setting addon ingress-dns=true in "addons-864560"
	I1101 00:33:28.595592 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.595946 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.598149 1203401 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-864560"
	I1101 00:33:28.598199 1203401 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-864560"
	I1101 00:33:28.598242 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.598629 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.599852 1203401 addons.go:69] Setting inspektor-gadget=true in profile "addons-864560"
	I1101 00:33:28.599880 1203401 addons.go:231] Setting addon inspektor-gadget=true in "addons-864560"
	I1101 00:33:28.599921 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.600333 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.607247 1203401 addons.go:69] Setting default-storageclass=true in profile "addons-864560"
	I1101 00:33:28.607277 1203401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-864560"
	I1101 00:33:28.607727 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.611389 1203401 addons.go:69] Setting metrics-server=true in profile "addons-864560"
	I1101 00:33:28.611421 1203401 addons.go:231] Setting addon metrics-server=true in "addons-864560"
	I1101 00:33:28.611469 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.617041 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.618021 1203401 addons.go:69] Setting gcp-auth=true in profile "addons-864560"
	I1101 00:33:28.618051 1203401 mustload.go:65] Loading cluster: addons-864560
	I1101 00:33:28.618277 1203401 config.go:182] Loaded profile config "addons-864560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:33:28.618533 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.629392 1203401 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-864560"
	I1101 00:33:28.629427 1203401 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-864560"
	I1101 00:33:28.629482 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.629932 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.643438 1203401 addons.go:69] Setting registry=true in profile "addons-864560"
	I1101 00:33:28.643482 1203401 addons.go:231] Setting addon registry=true in "addons-864560"
	I1101 00:33:28.643541 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.644086 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.657969 1203401 addons.go:69] Setting storage-provisioner=true in profile "addons-864560"
	I1101 00:33:28.658023 1203401 addons.go:231] Setting addon storage-provisioner=true in "addons-864560"
	I1101 00:33:28.658094 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.658821 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.701949 1203401 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-864560"
	I1101 00:33:28.701986 1203401 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-864560"
	I1101 00:33:28.702326 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
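
Each addon toggle above is paired with a "docker container inspect addons-864560 --format={{.State.Status}}" run to confirm the node container is still up before that addon is configured. A minimal sketch of that check, assuming docker is on PATH (containerState is a hypothetical helper, not minikube's actual cli_runner code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState shells out to the same inspect command the log shows
    // and returns the container's state string, e.g. "running".
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("addons-864560")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("state:", state)
    }
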
	I1101 00:33:28.811140 1203401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1101 00:33:28.822595 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 00:33:28.822567 1203401 addons.go:231] Setting addon default-storageclass=true in "addons-864560"
	I1101 00:33:28.830478 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 00:33:28.830485 1203401 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1101 00:33:28.830490 1203401 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1101 00:33:28.843461 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.852417 1203401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1101 00:33:28.857313 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.865654 1203401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:33:28.865760 1203401 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 00:33:28.872851 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 00:33:28.872866 1203401 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:33:28.872880 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 00:33:28.872926 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:28.872928 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:28.864213 1203401 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1101 00:33:28.873961 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 00:33:28.874050 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:28.864264 1203401 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 00:33:28.882405 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1101 00:33:28.882490 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:28.865613 1203401 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1101 00:33:28.895022 1203401 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1101 00:33:28.895043 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1101 00:33:28.895105 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:28.899589 1203401 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-864560"
	I1101 00:33:28.899627 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:28.900093 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.865619 1203401 out.go:177]   - Using image docker.io/registry:2.8.3
	I1101 00:33:28.938815 1203401 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1101 00:33:28.866400 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:28.916724 1203401 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-864560" context rescaled to 1 replicas
	I1101 00:33:28.940927 1203401 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1101 00:33:28.941044 1203401 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 00:33:28.950972 1203401 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1101 00:33:28.951001 1203401 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:33:28.957179 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 00:33:28.957199 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1101 00:33:28.961176 1203401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1101 00:33:28.965199 1203401 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 00:33:28.965268 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:28.973080 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
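
The longer cli_runner template above digs out which host port Docker published for the node container's 22/tcp endpoint, which is why every sshutil client in this run dials 127.0.0.1:34292. A sketch of the same lookup (sshHostPort is a hypothetical helper; the format template is copied verbatim from the log, including its surrounding single quotes, which the helper strips):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort resolves the host port Docker mapped to the container's
    // 22/tcp so an SSH client can dial 127.0.0.1:<port>.
    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
    }

    func main() {
        fmt.Println(sshHostPort("addons-864560"))
    }
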
	I1101 00:33:28.973248 1203401 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 00:33:28.981460 1203401 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 00:33:28.981484 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1101 00:33:28.981546 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:28.976675 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 00:33:28.989678 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:28.976686 1203401 out.go:177] * Verifying Kubernetes components...
	I1101 00:33:28.977572 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 00:33:28.998521 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:29.025690 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.032349 1203401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:33:29.032445 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 00:33:29.034356 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 00:33:29.036148 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 00:33:29.038108 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 00:33:29.040173 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 00:33:29.041766 1203401 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 00:33:29.043680 1203401 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 00:33:29.043698 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 00:33:29.043766 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:29.068340 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.082940 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.099456 1203401 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 00:33:29.103450 1203401 out.go:177]   - Using image docker.io/busybox:stable
	I1101 00:33:29.110683 1203401 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 00:33:29.110705 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 00:33:29.110784 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:29.125077 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.137288 1203401 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 00:33:29.137309 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 00:33:29.137378 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:29.171507 1203401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 00:33:29.190145 1203401 node_ready.go:35] waiting up to 6m0s for node "addons-864560" to be "Ready" ...
	I1101 00:33:29.200512 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.227529 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.252448 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.253922 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.255029 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.304411 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.313937 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:29.327837 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:33:29.419643 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 00:33:29.511515 1203401 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 00:33:29.511537 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 00:33:29.534790 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 00:33:29.551346 1203401 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1101 00:33:29.551370 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1101 00:33:29.660544 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 00:33:29.692833 1203401 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 00:33:29.692858 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 00:33:29.709397 1203401 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1101 00:33:29.709432 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1101 00:33:29.746696 1203401 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 00:33:29.746732 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 00:33:29.762854 1203401 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 00:33:29.762886 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 00:33:29.771559 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 00:33:29.790242 1203401 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 00:33:29.790269 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 00:33:29.810392 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 00:33:29.832269 1203401 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 00:33:29.832293 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 00:33:29.836168 1203401 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1101 00:33:29.836190 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1101 00:33:29.888713 1203401 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 00:33:29.888745 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 00:33:29.926624 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 00:33:29.949730 1203401 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 00:33:29.949754 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 00:33:29.986419 1203401 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1101 00:33:29.986455 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1101 00:33:30.006103 1203401 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 00:33:30.006131 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 00:33:30.064864 1203401 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 00:33:30.064891 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 00:33:30.090200 1203401 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 00:33:30.090228 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 00:33:30.134266 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 00:33:30.146910 1203401 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1101 00:33:30.146938 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1101 00:33:30.184739 1203401 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 00:33:30.184767 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 00:33:30.203390 1203401 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 00:33:30.203423 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 00:33:30.214420 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 00:33:30.294292 1203401 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 00:33:30.294318 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1101 00:33:30.372484 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 00:33:30.402313 1203401 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 00:33:30.402346 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 00:33:30.466869 1203401 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1101 00:33:30.466895 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1101 00:33:30.525089 1203401 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 00:33:30.525115 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 00:33:30.636390 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1101 00:33:30.666590 1203401 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 00:33:30.666657 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 00:33:30.791577 1203401 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 00:33:30.791638 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 00:33:30.889686 1203401 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 00:33:30.889755 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 00:33:31.056231 1203401 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 00:33:31.056256 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 00:33:31.137497 1203401 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.965906796s)
	I1101 00:33:31.137528 1203401 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
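
The sed pipeline that just completed (1.97s) splices a hosts block into the CoreDNS Corefile ahead of its "forward . /etc/resolv.conf" line (and adds a "log" directive before "errors"), which is what makes host.minikube.internal resolve to the gateway 192.168.49.1. A sketch mirroring that insertion locally; the hosts block text is taken verbatim from the command, while the sample Corefile is an assumed, trimmed-down input (the real command edits the coredns ConfigMap through kubectl):

    package main

    import (
        "fmt"
        "strings"
    )

    // Assumed minimal Corefile; the real one lives in the coredns ConfigMap.
    const corefile = `.:53 {
            errors
            forward . /etc/resolv.conf
            cache 30
    }`

    // Verbatim from the sed expression in the log above.
    const hostsBlock = `        hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }`

    func main() {
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out = append(out, hostsBlock) // same insertion point as the sed "/i"
            }
            out = append(out, line)
        }
        fmt.Println(strings.Join(out, "\n"))
    }
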
	I1101 00:33:31.243020 1203401 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 00:33:31.243044 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 00:33:31.292915 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:31.370883 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 00:33:33.736528 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:33.813144 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.485270571s)
	I1101 00:33:33.813243 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.393575698s)
	I1101 00:33:33.813315 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.278502041s)
	I1101 00:33:34.577210 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.916616955s)
	I1101 00:33:34.577253 1203401 addons.go:467] Verifying addon ingress=true in "addons-864560"
	I1101 00:33:34.579736 1203401 out.go:177] * Verifying ingress addon...
	I1101 00:33:34.577432 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.805846962s)
	I1101 00:33:34.577455 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.767040123s)
	I1101 00:33:34.577570 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.650921715s)
	I1101 00:33:34.577598 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.44330487s)
	I1101 00:33:34.577674 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.363232033s)
	I1101 00:33:34.577763 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.205250594s)
	I1101 00:33:34.577821 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.941358868s)
	I1101 00:33:34.583175 1203401 addons.go:467] Verifying addon registry=true in "addons-864560"
	I1101 00:33:34.584944 1203401 out.go:177] * Verifying registry addon...
	I1101 00:33:34.583590 1203401 addons.go:467] Verifying addon metrics-server=true in "addons-864560"
	W1101 00:33:34.583613 1203401 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 00:33:34.583568 1203401 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 00:33:34.588149 1203401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 00:33:34.588294 1203401 retry.go:31] will retry after 138.346522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
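
The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not finished registering the new kind when the custom resource arrives ("ensure CRDs are installed first"). retry.go handles this by re-running the apply after a short delay, and the follow-up run at 00:33:34.727829 with --force succeeds. A minimal sketch of that retry-with-backoff pattern, assuming kubectl is on PATH (applyWithRetry and its arguments are illustrative, not minikube's retry.go):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs "kubectl apply" a few times with growing
    // backoff, enough to ride out the CRD-registration race in the log.
    func applyWithRetry(args []string, attempts int, backoff time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("attempt %d: %v\n%s", i+1, err, out)
            time.Sleep(backoff)
            backoff *= 2 // exponential backoff between attempts
        }
        return lastErr
    }

    func main() {
        err := applyWithRetry(
            []string{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
            3, 150*time.Millisecond)
        fmt.Println(err)
    }
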
	I1101 00:33:34.605239 1203401 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 00:33:34.605263 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:34.609800 1203401 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 00:33:34.609821 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 00:33:34.616774 1203401 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
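
The storage-provisioner-rancher warning is the standard optimistic-concurrency failure: marking local-path as default updates the StorageClass object, another writer modified it between the read and the write, so the API server rejects the stale resourceVersion ("the object has been modified; please apply your changes to the latest version and try again"). The usual remedy is to re-read and retry on conflict; a sketch of that pattern with client-go, assuming a kubeconfig at the default location and the k8s.io modules in go.mod (the storage class name and annotation match what the addon sets, but this is not minikube's code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-read the object on every attempt so the update carries a
            // fresh resourceVersion.
            sc, err := cs.StorageV1().StorageClasses().Get(
                context.TODO(), "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(
                context.TODO(), sc, metav1.UpdateOptions{})
            return err // a Conflict here triggers another Get+Update round
        })
        fmt.Println("done:", err)
    }
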
	I1101 00:33:34.617926 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:34.621239 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
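
Each kapi.go:96 line here, and the long run of them below, is one iteration of a poll over the pods matching a label selector, logged until every pod leaves Pending. A rough equivalent that shells out to kubectl (podsRunning, the namespaces, and the 500ms interval are illustrative; minikube's kapi.go talks to the API server directly):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podsRunning reports whether every pod matching the selector is Running.
    func podsRunning(ns, selector string) bool {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
            "-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
        if err != nil {
            return false
        }
        phases := strings.Fields(string(out))
        if len(phases) == 0 {
            return false // nothing scheduled yet
        }
        for _, p := range phases {
            if p != "Running" {
                return false // still Pending, as logged above
            }
        }
        return true
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            if podsRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx") {
                fmt.Println("all pods Running")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pods")
    }
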
	I1101 00:33:34.727829 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 00:33:34.966376 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.595430424s)
	I1101 00:33:34.966414 1203401 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-864560"
	I1101 00:33:34.969731 1203401 out.go:177] * Verifying csi-hostpath-driver addon...
	I1101 00:33:34.972483 1203401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 00:33:34.994767 1203401 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 00:33:34.994794 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:35.014080 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:35.125550 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:35.127251 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:35.570886 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:35.632859 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:35.660941 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:36.020620 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:36.146619 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:36.146659 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:36.191621 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:36.203114 1203401 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 00:33:36.203215 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:36.233096 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:36.451220 1203401 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 00:33:36.474001 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.746119891s)
	I1101 00:33:36.494790 1203401 addons.go:231] Setting addon gcp-auth=true in "addons-864560"
	I1101 00:33:36.494885 1203401 host.go:66] Checking if "addons-864560" exists ...
	I1101 00:33:36.495368 1203401 cli_runner.go:164] Run: docker container inspect addons-864560 --format={{.State.Status}}
	I1101 00:33:36.527349 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:36.544181 1203401 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 00:33:36.544230 1203401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864560
	I1101 00:33:36.585991 1203401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/addons-864560/id_rsa Username:docker}
	I1101 00:33:36.622925 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:36.627712 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:36.736264 1203401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1101 00:33:36.738549 1203401 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1101 00:33:36.740570 1203401 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 00:33:36.740626 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 00:33:36.777735 1203401 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 00:33:36.777803 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 00:33:36.799718 1203401 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 00:33:36.799777 1203401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1101 00:33:36.837640 1203401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 00:33:37.037981 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:37.122025 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:37.125537 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:37.518471 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:37.625680 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:37.630477 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:37.872960 1203401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.035243135s)
	I1101 00:33:37.874522 1203401 addons.go:467] Verifying addon gcp-auth=true in "addons-864560"
	I1101 00:33:37.878154 1203401 out.go:177] * Verifying gcp-auth addon...
	I1101 00:33:37.880630 1203401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 00:33:37.894433 1203401 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 00:33:37.894492 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:37.907338 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:38.019622 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:38.123720 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:38.132463 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:38.412324 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:38.520780 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:38.622599 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:38.626284 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:38.662544 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:38.910973 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:39.019122 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:39.122660 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:39.126372 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:39.411133 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:39.520965 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:39.622775 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:39.634703 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:39.911120 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:40.019329 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:40.125229 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:40.127778 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:40.411892 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:40.531093 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:40.623027 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:40.626662 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:40.912973 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:41.019561 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:41.123401 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:41.126613 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:41.162836 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:41.410813 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:41.519101 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:41.622656 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:41.626188 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:41.910986 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:42.018616 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:42.122721 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:42.126377 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:42.411844 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:42.518979 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:42.622203 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:42.627009 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:42.911439 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:43.018942 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:43.122645 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:43.124575 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:43.162907 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:43.410993 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:43.519030 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:43.622502 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:43.624903 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:43.911649 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:44.019241 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:44.122349 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:44.125521 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:44.411083 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:44.519433 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:44.622941 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:44.625459 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:44.910883 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:45.019692 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:45.122765 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:45.125788 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:45.411522 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:45.518799 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:45.621887 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:45.625168 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:45.662436 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:45.911841 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:46.018978 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:46.122468 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:46.125887 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:46.411654 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:46.519999 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:46.622645 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:46.625981 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:46.910843 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:47.019990 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:47.122570 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:47.126458 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:47.411674 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:47.518627 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:47.622126 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:47.625702 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:47.662524 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:47.911587 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:48.018511 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:48.122009 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:48.125408 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:48.411069 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:48.518209 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:48.623105 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:48.625159 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:48.911187 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:49.018416 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:49.121833 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:49.125678 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:49.411272 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:49.519536 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:49.622809 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:49.626190 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:49.911821 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:50.019053 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:50.122695 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:50.126263 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:50.162406 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:50.411387 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:50.518620 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:50.622797 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:50.625993 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:50.911019 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:51.019032 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:51.122863 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:51.125934 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:51.411676 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:51.519946 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:51.623601 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:51.626759 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:51.911740 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:52.019056 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:52.122864 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:52.125686 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:52.411879 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:52.518976 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:52.622762 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:52.625382 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:52.662928 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:52.911326 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:53.018774 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:53.122774 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:53.125695 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:53.411428 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:53.519604 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:53.622865 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:53.626880 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:53.911898 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:54.019008 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:54.122509 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:54.127166 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:54.411080 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:54.519153 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:54.622243 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:54.625752 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:54.910912 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:55.019453 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:55.122816 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:55.126442 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:55.162532 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:55.411214 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:55.518418 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:55.622785 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:55.629840 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:55.911881 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:56.019206 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:56.122927 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:56.125322 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:56.411071 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:56.518731 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:56.626133 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:56.627252 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:56.911902 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:57.019030 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:57.122857 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:57.126538 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:57.163041 1203401 node_ready.go:58] node "addons-864560" has status "Ready":"False"
	I1101 00:33:57.411414 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:57.518584 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:57.622604 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:57.625091 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:57.911490 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:58.018954 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:58.122896 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:58.125002 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:58.411079 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:58.518318 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:58.621833 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:58.625313 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:58.911777 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:59.019218 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:59.122298 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:59.125666 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:33:59.415106 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:33:59.541302 1203401 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 00:33:59.541326 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:33:59.648781 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:33:59.652542 1203401 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 00:33:59.652566 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
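
The kapi.go:86/96 lines above are minikube polling each addon's pods by label selector until every matching pod leaves Pending. A minimal sketch of that kind of check with client-go; the helper name, all-namespaces listing, and 500ms poll interval are assumptions for illustration (suggested by the spacing of the log timestamps), not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allPodsRunning lists pods matching selector and reports whether at least
// one matched and none is still Pending (hypothetical helper).
func allPodsRunning(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
	if len(pods.Items) == 0 {
		return false, nil
	}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodPending {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := allPodsRunning(context.Background(), cs, "kubernetes.io/minikube-addons=registry")
		if err == nil && ok {
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
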
	I1101 00:33:59.662766 1203401 node_ready.go:49] node "addons-864560" has status "Ready":"True"
	I1101 00:33:59.662795 1203401 node_ready.go:38] duration metric: took 30.472578009s waiting for node "addons-864560" to be "Ready" ...
	I1101 00:33:59.662807 1203401 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:33:59.676413 1203401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n8p8b" in "kube-system" namespace to be "Ready" ...
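
The node_ready transition just above is the same polling pattern applied to the node object's Ready condition. A hypothetical helper that would slot into the sketch above (same imports):

// nodeReady reports whether the named node's Ready condition is True,
// mirroring the node_ready.go wait above (hypothetical helper).
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
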
	I1101 00:33:59.911649 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:00.060939 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:00.125761 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:00.137061 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:00.413289 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:00.529554 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:00.625421 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:00.627438 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:00.915946 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:01.020372 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:01.122835 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:01.127675 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:01.211807 1203401 pod_ready.go:92] pod "coredns-5dd5756b68-n8p8b" in "kube-system" namespace has status "Ready":"True"
	I1101 00:34:01.211888 1203401 pod_ready.go:81] duration metric: took 1.535437879s waiting for pod "coredns-5dd5756b68-n8p8b" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.211945 1203401 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-864560" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.218447 1203401 pod_ready.go:92] pod "etcd-addons-864560" in "kube-system" namespace has status "Ready":"True"
	I1101 00:34:01.218515 1203401 pod_ready.go:81] duration metric: took 6.536289ms waiting for pod "etcd-addons-864560" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.218543 1203401 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-864560" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.225915 1203401 pod_ready.go:92] pod "kube-apiserver-addons-864560" in "kube-system" namespace has status "Ready":"True"
	I1101 00:34:01.225982 1203401 pod_ready.go:81] duration metric: took 7.416972ms waiting for pod "kube-apiserver-addons-864560" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.226009 1203401 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-864560" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.264103 1203401 pod_ready.go:92] pod "kube-controller-manager-addons-864560" in "kube-system" namespace has status "Ready":"True"
	I1101 00:34:01.264123 1203401 pod_ready.go:81] duration metric: took 38.094049ms waiting for pod "kube-controller-manager-addons-864560" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.264137 1203401 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ffrlw" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.411135 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:01.523934 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:01.625382 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:01.631761 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:01.665258 1203401 pod_ready.go:92] pod "kube-proxy-ffrlw" in "kube-system" namespace has status "Ready":"True"
	I1101 00:34:01.665328 1203401 pod_ready.go:81] duration metric: took 401.183387ms waiting for pod "kube-proxy-ffrlw" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.665355 1203401 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-864560" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:01.911104 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:02.020538 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:02.063931 1203401 pod_ready.go:92] pod "kube-scheduler-addons-864560" in "kube-system" namespace has status "Ready":"True"
	I1101 00:34:02.063953 1203401 pod_ready.go:81] duration metric: took 398.578204ms waiting for pod "kube-scheduler-addons-864560" in "kube-system" namespace to be "Ready" ...
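
The pod_ready waits here check the pod's Ready condition rather than its phase, which is why a pod can be Running yet still report "Ready":"False" while its readiness probe fails, as metrics-server does below. A hypothetical helper in the same style as the sketches above:

// podReady reports whether the pod's Ready condition is True; a Running pod
// is not Ready until its readiness probes pass (hypothetical helper).
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
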
	I1101 00:34:02.063964 1203401 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-25rbs" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:02.132492 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:02.132701 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:02.411686 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:02.524859 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:02.622520 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:02.627113 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:02.917756 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:03.021901 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:03.129838 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:03.137771 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:03.411927 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:03.524959 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:03.625304 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:03.629689 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:03.912580 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:04.023631 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:04.125621 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:04.129236 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:04.372359 1203401 pod_ready.go:102] pod "metrics-server-7c66d45ddc-25rbs" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:04.420580 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:04.526609 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:04.624609 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:04.632319 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:04.915859 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:05.021550 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:05.129834 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:05.134210 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:05.413780 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:05.521939 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:05.633859 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:05.650507 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:05.916333 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:06.022926 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:06.123134 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:06.129372 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:06.411520 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:06.520178 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:06.622759 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:06.627395 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:06.875192 1203401 pod_ready.go:102] pod "metrics-server-7c66d45ddc-25rbs" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:06.912220 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:07.020885 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:07.124595 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:07.130188 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:07.412686 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:07.520668 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:07.623391 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:07.627642 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:07.913372 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:08.023517 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:08.123171 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:08.126225 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:08.412786 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:08.520526 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:08.623389 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:08.626199 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:08.910945 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:09.022762 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:09.122614 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:09.127133 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:09.371129 1203401 pod_ready.go:102] pod "metrics-server-7c66d45ddc-25rbs" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:09.411681 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:09.530986 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:09.622593 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:09.627730 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:09.908367 1203401 pod_ready.go:92] pod "metrics-server-7c66d45ddc-25rbs" in "kube-system" namespace has status "Ready":"True"
	I1101 00:34:09.908442 1203401 pod_ready.go:81] duration metric: took 7.844470084s waiting for pod "metrics-server-7c66d45ddc-25rbs" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:09.908467 1203401 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:09.942979 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:10.021547 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:10.123633 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:10.126529 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:10.416073 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:10.520544 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:10.623184 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:10.626004 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:10.912092 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:11.020541 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:11.123224 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:11.126022 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:11.413692 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:11.532330 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:11.623743 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:11.634068 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:11.911971 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:12.004970 1203401 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:12.021658 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:12.124120 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:12.129485 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:12.412857 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:12.533458 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:12.628602 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:12.637046 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:12.912780 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:13.032644 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:13.123570 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:13.127963 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:13.412656 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:13.526107 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:13.623085 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:13.628966 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:13.912279 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:14.005168 1203401 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:14.021075 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:14.124421 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:14.126576 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:14.411645 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:14.525741 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:14.623114 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:14.631355 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:14.913920 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:15.040825 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:15.124887 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:15.130594 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:15.411769 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:15.539233 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:15.622835 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:15.628128 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:15.911683 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:16.005636 1203401 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:16.023523 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:16.125118 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:16.129991 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:16.412725 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:16.524044 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:16.622910 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:16.625560 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:16.911582 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:17.020451 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:17.123802 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:17.136370 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:17.411377 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:17.526441 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:17.623050 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:17.627048 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:17.911003 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:18.005996 1203401 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:18.019314 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:18.122859 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:18.126097 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:18.411516 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:18.533105 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:18.623052 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:18.626477 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:18.911438 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:19.020820 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:19.124711 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:19.128399 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:19.411531 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:19.520778 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:19.626226 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:19.638922 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:19.911655 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:20.007308 1203401 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:20.020526 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:20.127453 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:20.131641 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:20.412134 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:20.537513 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:20.623433 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:20.628481 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:20.911712 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:21.021390 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:21.136482 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:21.137289 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:21.411179 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:21.521306 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:21.629969 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:21.631399 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:21.911397 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:22.008193 1203401 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace has status "Ready":"False"
	I1101 00:34:22.021449 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:22.122812 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:22.125936 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:22.411084 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:22.504453 1203401 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace has status "Ready":"True"
	I1101 00:34:22.504525 1203401 pod_ready.go:81] duration metric: took 12.596036098s waiting for pod "nvidia-device-plugin-daemonset-jttg2" in "kube-system" namespace to be "Ready" ...
	I1101 00:34:22.504561 1203401 pod_ready.go:38] duration metric: took 22.841741824s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:34:22.504600 1203401 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:34:22.504677 1203401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:34:22.519639 1203401 api_server.go:72] duration metric: took 53.558430664s to wait for apiserver process to appear ...
	I1101 00:34:22.519710 1203401 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:34:22.519742 1203401 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 00:34:22.522149 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:22.529594 1203401 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 00:34:22.530903 1203401 api_server.go:141] control plane version: v1.28.3
	I1101 00:34:22.530930 1203401 api_server.go:131] duration metric: took 11.198902ms to wait for apiserver health ...
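
Once the kube-apiserver process shows up (the `sudo pgrep -xnf kube-apiserver.*minikube.*` run over SSH above), minikube probes the apiserver's /healthz endpoint until it answers 200 with body "ok". A minimal standalone probe; skipping TLS verification is an assumption for brevity, where a real client would trust minikube's CA bundle and authenticate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cluster-local certificate; a real
			// client would load minikube's CA instead of skipping checks.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with body "ok", matching the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
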
	I1101 00:34:22.530940 1203401 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:34:22.541221 1203401 system_pods.go:59] 18 kube-system pods found
	I1101 00:34:22.541257 1203401 system_pods.go:61] "coredns-5dd5756b68-n8p8b" [6b5cb7df-73ff-4302-bfe3-fa7a5b7c3c33] Running
	I1101 00:34:22.541268 1203401 system_pods.go:61] "csi-hostpath-attacher-0" [a37ca84e-ff83-4de5-a55f-a5c43040b6e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 00:34:22.541275 1203401 system_pods.go:61] "csi-hostpath-resizer-0" [185d5079-1477-4410-95b7-e66cc5d6a804] Running
	I1101 00:34:22.541283 1203401 system_pods.go:61] "csi-hostpathplugin-lzlw2" [0f542675-c59e-48f1-9d0c-9971d798c20e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 00:34:22.541290 1203401 system_pods.go:61] "etcd-addons-864560" [c4dd170d-abcd-4b8f-89f2-969d85a88954] Running
	I1101 00:34:22.541297 1203401 system_pods.go:61] "kindnet-sx7k4" [8eb8c465-3cfa-47ca-a206-8525afeb51d0] Running
	I1101 00:34:22.541301 1203401 system_pods.go:61] "kube-apiserver-addons-864560" [8486fefc-b952-4f4a-bebe-0740f1bf3b07] Running
	I1101 00:34:22.541307 1203401 system_pods.go:61] "kube-controller-manager-addons-864560" [ea744ab3-9c3d-40a8-90ac-1490ebbc3d86] Running
	I1101 00:34:22.541322 1203401 system_pods.go:61] "kube-ingress-dns-minikube" [77596e43-4daf-4ea6-88fe-a52d31f29ad6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 00:34:22.541328 1203401 system_pods.go:61] "kube-proxy-ffrlw" [e88353f3-aece-4963-bca7-6c77ea807459] Running
	I1101 00:34:22.541339 1203401 system_pods.go:61] "kube-scheduler-addons-864560" [c12864e0-d259-4e0f-8bad-d4cdeabe84c1] Running
	I1101 00:34:22.541344 1203401 system_pods.go:61] "metrics-server-7c66d45ddc-25rbs" [89137386-d1fd-406f-8465-066e80796edc] Running
	I1101 00:34:22.541351 1203401 system_pods.go:61] "nvidia-device-plugin-daemonset-jttg2" [de1942f3-46cd-42dc-a069-706b386eefc0] Running
	I1101 00:34:22.541361 1203401 system_pods.go:61] "registry-proxy-p9xzm" [2afe95c8-ac6a-4431-8018-5c27cd0852dd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 00:34:22.541369 1203401 system_pods.go:61] "registry-pscsg" [0de356cb-166c-4eeb-b9a8-cbd31f74f4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 00:34:22.541375 1203401 system_pods.go:61] "snapshot-controller-58dbcc7b99-9jlzg" [44385f0b-2edb-4d6c-b985-5ad7128f5dac] Running
	I1101 00:34:22.541385 1203401 system_pods.go:61] "snapshot-controller-58dbcc7b99-t724w" [3b001677-be66-423d-ae1c-3c52c41bb804] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 00:34:22.541394 1203401 system_pods.go:61] "storage-provisioner" [58b9e427-6900-43a0-9c3e-f67f92fc9b5d] Running
	I1101 00:34:22.541400 1203401 system_pods.go:74] duration metric: took 10.434492ms to wait for pod list to return data ...
	I1101 00:34:22.541412 1203401 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:34:22.543933 1203401 default_sa.go:45] found service account: "default"
	I1101 00:34:22.543993 1203401 default_sa.go:55] duration metric: took 2.570722ms for default service account to be created ...
	I1101 00:34:22.544009 1203401 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:34:22.554001 1203401 system_pods.go:86] 18 kube-system pods found
	I1101 00:34:22.554035 1203401 system_pods.go:89] "coredns-5dd5756b68-n8p8b" [6b5cb7df-73ff-4302-bfe3-fa7a5b7c3c33] Running
	I1101 00:34:22.554046 1203401 system_pods.go:89] "csi-hostpath-attacher-0" [a37ca84e-ff83-4de5-a55f-a5c43040b6e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 00:34:22.554054 1203401 system_pods.go:89] "csi-hostpath-resizer-0" [185d5079-1477-4410-95b7-e66cc5d6a804] Running
	I1101 00:34:22.554063 1203401 system_pods.go:89] "csi-hostpathplugin-lzlw2" [0f542675-c59e-48f1-9d0c-9971d798c20e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 00:34:22.554074 1203401 system_pods.go:89] "etcd-addons-864560" [c4dd170d-abcd-4b8f-89f2-969d85a88954] Running
	I1101 00:34:22.554080 1203401 system_pods.go:89] "kindnet-sx7k4" [8eb8c465-3cfa-47ca-a206-8525afeb51d0] Running
	I1101 00:34:22.554088 1203401 system_pods.go:89] "kube-apiserver-addons-864560" [8486fefc-b952-4f4a-bebe-0740f1bf3b07] Running
	I1101 00:34:22.554093 1203401 system_pods.go:89] "kube-controller-manager-addons-864560" [ea744ab3-9c3d-40a8-90ac-1490ebbc3d86] Running
	I1101 00:34:22.554101 1203401 system_pods.go:89] "kube-ingress-dns-minikube" [77596e43-4daf-4ea6-88fe-a52d31f29ad6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 00:34:22.554107 1203401 system_pods.go:89] "kube-proxy-ffrlw" [e88353f3-aece-4963-bca7-6c77ea807459] Running
	I1101 00:34:22.554112 1203401 system_pods.go:89] "kube-scheduler-addons-864560" [c12864e0-d259-4e0f-8bad-d4cdeabe84c1] Running
	I1101 00:34:22.554120 1203401 system_pods.go:89] "metrics-server-7c66d45ddc-25rbs" [89137386-d1fd-406f-8465-066e80796edc] Running
	I1101 00:34:22.554131 1203401 system_pods.go:89] "nvidia-device-plugin-daemonset-jttg2" [de1942f3-46cd-42dc-a069-706b386eefc0] Running
	I1101 00:34:22.554138 1203401 system_pods.go:89] "registry-proxy-p9xzm" [2afe95c8-ac6a-4431-8018-5c27cd0852dd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 00:34:22.554175 1203401 system_pods.go:89] "registry-pscsg" [0de356cb-166c-4eeb-b9a8-cbd31f74f4bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 00:34:22.554186 1203401 system_pods.go:89] "snapshot-controller-58dbcc7b99-9jlzg" [44385f0b-2edb-4d6c-b985-5ad7128f5dac] Running
	I1101 00:34:22.554194 1203401 system_pods.go:89] "snapshot-controller-58dbcc7b99-t724w" [3b001677-be66-423d-ae1c-3c52c41bb804] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 00:34:22.554199 1203401 system_pods.go:89] "storage-provisioner" [58b9e427-6900-43a0-9c3e-f67f92fc9b5d] Running
	I1101 00:34:22.554206 1203401 system_pods.go:126] duration metric: took 10.190958ms to wait for k8s-apps to be running ...
	I1101 00:34:22.554216 1203401 system_svc.go:44] waiting for kubelet service to be running ...
	I1101 00:34:22.554273 1203401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:34:22.568846 1203401 system_svc.go:56] duration metric: took 14.619701ms for WaitForService to wait for kubelet.
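
The kubelet check above reduces to the exit status of systemctl executed inside the node over SSH. A local equivalent, with the argv copied from the logged command (`systemctl is-active --quiet` exits 0 only when the queried unit is active):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the ssh_runner command in the log; a zero exit status
	// means kubelet is active, anything else means it is not.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
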
	I1101 00:34:22.568876 1203401 kubeadm.go:581] duration metric: took 53.607673391s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:34:22.568897 1203401 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:34:22.571936 1203401 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 00:34:22.571964 1203401 node_conditions.go:123] node cpu capacity is 2
	I1101 00:34:22.571978 1203401 node_conditions.go:105] duration metric: took 3.074612ms to run NodePressure ...
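
The NodePressure verification reads the node's reported capacity: the 203034800Ki of ephemeral storage and 2 CPUs above come straight from node.Status.Capacity. A hypothetical helper in the same style as the client-go sketch earlier (same imports):

// printNodeCapacity echoes the figures the node_conditions lines report
// (hypothetical helper, not minikube's implementation).
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	return nil
}
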
	I1101 00:34:22.571991 1203401 start.go:228] waiting for startup goroutines ...
	I1101 00:34:22.623127 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:22.632067 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:22.912971 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:23.020965 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:23.125489 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:23.131210 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:23.411313 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:23.524661 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:23.650752 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:23.668425 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:23.912907 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:24.025631 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:24.135487 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:24.156706 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:24.449271 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:24.535323 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:24.623285 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:24.626230 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:24.911624 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:25.020143 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:25.125038 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:25.126950 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:25.412388 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:25.521281 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:25.624453 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:25.631961 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:25.911460 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:26.020708 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:26.126325 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:26.130681 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:26.411636 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:26.521425 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:26.632364 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:26.633371 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:26.911623 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:27.020445 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:27.123205 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:27.125914 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:27.412303 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:27.529648 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:27.624747 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:27.631529 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:27.913772 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:28.020975 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:28.124201 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:28.128977 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:28.411938 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:28.521864 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:28.622906 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:28.625829 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:28.910929 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:29.019956 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:29.122860 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:29.126774 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:29.410924 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:29.520831 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:29.622336 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:29.626416 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:29.911564 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:30.020306 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:30.124300 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:30.131318 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:30.412125 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:30.522830 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:30.629401 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:30.632737 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:30.920399 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:31.022138 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:31.127576 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:31.129008 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:31.412482 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:31.520868 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:31.623238 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:31.631201 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:31.912179 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:32.021975 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:32.124632 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:32.132253 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:32.412682 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:32.522434 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:32.625400 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:32.627073 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:32.911812 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:33.024280 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:33.123952 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:33.127213 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:33.411406 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:33.520831 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:33.622459 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:33.632862 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:33.912282 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:34.021238 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:34.127275 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:34.130811 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:34.411897 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:34.521516 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:34.623540 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:34.627737 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:34.913783 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:35.021251 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:35.122745 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:35.125680 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:35.410946 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:35.520216 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:35.622398 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:35.625838 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:35.911573 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:36.020634 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:36.124423 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:36.126286 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:36.411583 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:36.523305 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:36.623051 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:36.627786 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:36.912477 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:37.020808 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:37.123541 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:37.129502 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:37.412721 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:37.521125 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:37.632854 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:37.633745 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:37.912242 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:38.022461 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:38.128037 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:38.131264 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:38.412998 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:38.521624 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:38.623015 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:38.633900 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:38.911896 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:39.021325 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:39.124043 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:39.134841 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:39.410860 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:39.520820 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:39.629787 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:39.631476 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:39.915048 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:40.020667 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:40.123655 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:40.127366 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 00:34:40.411175 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:40.520240 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:40.622622 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:40.627610 1203401 kapi.go:107] duration metric: took 1m6.039457906s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 00:34:40.912580 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:41.020553 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:41.122492 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:41.411429 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:41.520070 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:41.623000 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:41.911923 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:42.021875 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:42.122633 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:42.411338 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:42.521197 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:42.633639 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:42.910846 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:43.020917 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:43.122585 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:43.412175 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:43.523190 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:43.622617 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:43.912432 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:44.021588 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:44.123453 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:44.411448 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:44.523982 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:44.623398 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:44.911467 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:45.022926 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:45.122640 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:45.412389 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:45.521236 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:45.622982 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:45.912431 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:46.020367 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:46.122620 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:46.412479 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:46.522918 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:46.622507 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:46.912565 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:47.021036 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:47.122850 1203401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:34:47.412267 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:47.523216 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:47.623575 1203401 kapi.go:107] duration metric: took 1m13.040001585s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 00:34:47.911409 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:48.021320 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:48.412366 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:48.521581 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:48.919202 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:49.025319 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:49.410887 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 00:34:49.524377 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:49.911854 1203401 kapi.go:107] duration metric: took 1m12.031221553s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 00:34:49.914056 1203401 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-864560 cluster.
	I1101 00:34:49.915876 1203401 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 00:34:49.917665 1203401 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
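The three messages above describe the gcp-auth webhook's opt-out mechanism. Below is a minimal client-go sketch, not minikube's own code, of creating a pod that carries the `gcp-auth-skip-secret` label so the webhook skips mounting credentials into it; the label value "true", the pod name, and the kubeconfig wiring are assumptions for illustration.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config, which `minikube start` has just pointed at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical pod name
			// Presence of this label tells the gcp-auth webhook to skip
			// mounting GCP credentials into the pod, per the message above.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Setting the same key under `metadata.labels` in a YAML manifest should have the same effect.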
	I1101 00:34:50.020101 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:50.520512 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:51.020576 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:51.529084 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:52.020582 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:52.539576 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:53.023072 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:53.523089 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:54.020260 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:54.523914 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:55.020310 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:55.523131 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:56.020080 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:56.522273 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:57.021094 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:57.520183 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:58.019997 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:58.524811 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:59.027044 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:34:59.521246 1203401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 00:35:00.023483 1203401 kapi.go:107] duration metric: took 1m25.050992229s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 00:35:00.029184 1203401 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, inspektor-gadget, nvidia-device-plugin, metrics-server, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1101 00:35:00.031039 1203401 addons.go:502] enable addons completed in 1m31.438180679s: enabled=[storage-provisioner cloud-spanner ingress-dns inspektor-gadget nvidia-device-plugin metrics-server default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1101 00:35:00.031107 1203401 start.go:233] waiting for cluster config update ...
	I1101 00:35:00.031130 1203401 start.go:242] writing updated cluster config ...
	I1101 00:35:00.031502 1203401 ssh_runner.go:195] Run: rm -f paused
	I1101 00:35:00.116348 1203401 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 00:35:00.118738 1203401 out.go:177] * Done! kubectl is now configured to use "addons-864560" cluster and "default" namespace by default
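The long run of kapi.go:96 lines above is a label-selector poll: each tick lists the pods matching one selector, logs their current state, and the wait eventually resolves into a kapi.go:107 duration metric. Here is a minimal sketch of that pattern with client-go; it is not the kapi.go implementation, and the 500ms interval and the "any non-Running pod means keep waiting" rule are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until all are Running,
// then prints how long the wait took (compare the kapi.go:107 lines above).
func waitForLabel(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // nothing yet: "current state: Pending"
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil // keep polling
			}
		}
		return true, nil
	})
	if err == nil {
		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(client, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}
```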
	
	* 
	* ==> CRI-O <==
	* Nov 01 00:38:14 addons-864560 crio[893]: time="2023-11-01 00:38:14.681894331Z" level=info msg="Started container" PID=8594 containerID=4534cd71f83f2cc43f769d5066641af9646aff823d5a2849ade2d37979b1ece6 description=default/hello-world-app-5d77478584-c74sw/hello-world-app id=8a45204d-1147-41ee-a1f9-3215a64828e8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd40306ecb4264228c4cd162ab214efd74d1f05ef249dded15ddaf3edfe4ee9a
	Nov 01 00:38:15 addons-864560 crio[893]: time="2023-11-01 00:38:15.288344624Z" level=info msg="Removing container: 857007b3679d801ae3f13d5e19cd11bba4f2f8a7fddccc4d87e7f5ea50ba3e84" id=e9f21d1e-1930-461e-b658-963622b7ea49 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 00:38:15 addons-864560 crio[893]: time="2023-11-01 00:38:15.311297860Z" level=info msg="Removed container 857007b3679d801ae3f13d5e19cd11bba4f2f8a7fddccc4d87e7f5ea50ba3e84: default/hello-world-app-5d77478584-c74sw/hello-world-app" id=e9f21d1e-1930-461e-b658-963622b7ea49 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 00:38:15 addons-864560 crio[893]: time="2023-11-01 00:38:15.592454666Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=d4d28078-3a70-4370-be1d-7dbf3e474b20 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:38:15 addons-864560 crio[893]: time="2023-11-01 00:38:15.592671789Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6 registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097],Size_:520014,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=d4d28078-3a70-4370-be1d-7dbf3e474b20 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:38:15 addons-864560 crio[893]: time="2023-11-01 00:38:15.950741660Z" level=info msg="Removing container: 0e00c88ccfe4146f98283b028630f38f72d59feaf7c20b1a7c3157f57843ca8e" id=a3b9f718-d5f5-4842-aca6-0ba3bf7ebef2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 00:38:15 addons-864560 crio[893]: time="2023-11-01 00:38:15.989973375Z" level=info msg="Removed container 0e00c88ccfe4146f98283b028630f38f72d59feaf7c20b1a7c3157f57843ca8e: ingress-nginx/ingress-nginx-admission-patch-jftwc/patch" id=a3b9f718-d5f5-4842-aca6-0ba3bf7ebef2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 00:38:15 addons-864560 crio[893]: time="2023-11-01 00:38:15.991469750Z" level=info msg="Removing container: 513e871b54fa31ab53cdc3c398f3eb38473263bbb89a41c383d67a85d6abb154" id=ac0468a0-a686-4845-894d-71d1c3b4ec53 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.026481269Z" level=info msg="Removed container 513e871b54fa31ab53cdc3c398f3eb38473263bbb89a41c383d67a85d6abb154: ingress-nginx/ingress-nginx-admission-create-zj9d9/create" id=ac0468a0-a686-4845-894d-71d1c3b4ec53 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.028080388Z" level=info msg="Stopping pod sandbox: e1bc327123037e6b69fad39477e39b39b9c8134dd08e2bbb79ae7214d025dadf" id=e4d55034-04b2-4a5b-8989-bcc0e687c0ee name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.028121241Z" level=info msg="Stopped pod sandbox (already stopped): e1bc327123037e6b69fad39477e39b39b9c8134dd08e2bbb79ae7214d025dadf" id=e4d55034-04b2-4a5b-8989-bcc0e687c0ee name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.028577566Z" level=info msg="Removing pod sandbox: e1bc327123037e6b69fad39477e39b39b9c8134dd08e2bbb79ae7214d025dadf" id=43697c22-f42e-40b3-8a3d-b5f323e1d0c1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.036617171Z" level=info msg="Removed pod sandbox: e1bc327123037e6b69fad39477e39b39b9c8134dd08e2bbb79ae7214d025dadf" id=43697c22-f42e-40b3-8a3d-b5f323e1d0c1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.037297781Z" level=info msg="Stopping pod sandbox: 35242be5ae40469204fd5484e628da37960ad74c06f4f4dab3ff22503de9dfdd" id=e4fce705-1798-4074-9e4b-6c1fd5f4c211 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.037334680Z" level=info msg="Stopped pod sandbox (already stopped): 35242be5ae40469204fd5484e628da37960ad74c06f4f4dab3ff22503de9dfdd" id=e4fce705-1798-4074-9e4b-6c1fd5f4c211 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.037838102Z" level=info msg="Removing pod sandbox: 35242be5ae40469204fd5484e628da37960ad74c06f4f4dab3ff22503de9dfdd" id=8dba30af-05ed-4cc5-a78a-d49787bfaa84 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.046495146Z" level=info msg="Removed pod sandbox: 35242be5ae40469204fd5484e628da37960ad74c06f4f4dab3ff22503de9dfdd" id=8dba30af-05ed-4cc5-a78a-d49787bfaa84 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.047126313Z" level=info msg="Stopping pod sandbox: 5af71a509f7d9bcb25fdace3f0b415c6ff8044a2b833e911a219d1db2f578f26" id=0840ab96-ee7e-4fac-9166-e29091388e2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.047247600Z" level=info msg="Stopped pod sandbox (already stopped): 5af71a509f7d9bcb25fdace3f0b415c6ff8044a2b833e911a219d1db2f578f26" id=0840ab96-ee7e-4fac-9166-e29091388e2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.047622900Z" level=info msg="Removing pod sandbox: 5af71a509f7d9bcb25fdace3f0b415c6ff8044a2b833e911a219d1db2f578f26" id=bad124ac-6a7e-4f90-af30-c3c2c8884038 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.056107081Z" level=info msg="Removed pod sandbox: 5af71a509f7d9bcb25fdace3f0b415c6ff8044a2b833e911a219d1db2f578f26" id=bad124ac-6a7e-4f90-af30-c3c2c8884038 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.056745500Z" level=info msg="Stopping pod sandbox: 6b7e16e73f33aeefe99ffeb714a140e82e9ce3f032067298e2dbd2c1e650f984" id=f2aee680-ed02-440b-9b5e-77b5b3f8a462 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.056883058Z" level=info msg="Stopped pod sandbox (already stopped): 6b7e16e73f33aeefe99ffeb714a140e82e9ce3f032067298e2dbd2c1e650f984" id=f2aee680-ed02-440b-9b5e-77b5b3f8a462 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.057350436Z" level=info msg="Removing pod sandbox: 6b7e16e73f33aeefe99ffeb714a140e82e9ce3f032067298e2dbd2c1e650f984" id=aed76e05-1a8c-4e08-a60b-1980efae2b2e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 00:38:16 addons-864560 crio[893]: time="2023-11-01 00:38:16.064913745Z" level=info msg="Removed pod sandbox: 6b7e16e73f33aeefe99ffeb714a140e82e9ce3f032067298e2dbd2c1e650f984" id=aed76e05-1a8c-4e08-a60b-1980efae2b2e name=/runtime.v1.RuntimeService/RemovePodSandbox
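Every entry in the CRI-O log above names a CRI RPC (`/runtime.v1.RuntimeService/RemoveContainer`, `StopPodSandbox`, `RemovePodSandbox`). A minimal sketch of issuing the same RPCs over the node's CRI socket follows; the socket path comes from the node annotations later in this report, the IDs are the container and sandbox IDs from the log above, and this is neither kubelet nor CRI-O code.

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's runtime socket (root privileges are normally required).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.TODO()

	// Remove an exited container, as in the "Removing container" /
	// "Removed container" pair above.
	if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{
		ContainerId: "857007b3679d801ae3f13d5e19cd11bba4f2f8a7fddccc4d87e7f5ea50ba3e84",
	}); err != nil {
		panic(err)
	}

	// Stop, then remove, a pod sandbox. Stopping an already-stopped sandbox
	// is a no-op, matching "Stopped pod sandbox (already stopped)" above.
	sandboxID := "e1bc327123037e6b69fad39477e39b39b9c8134dd08e2bbb79ae7214d025dadf"
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		panic(err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		panic(err)
	}
}
```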
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4534cd71f83f2       6896cbb78303380df3c9b00dfb6032e56686e93920efa4526d79a51affb54816                                                   4 seconds ago        Exited              hello-world-app           2                   fd40306ecb426       hello-world-app-5d77478584-c74sw
	d10116b4659a2       ghcr.io/headlamp-k8s/headlamp@sha256:8e813897da00c345b1169d624b32e2367e5da1dbbffe33226f8a92973b816b50              About a minute ago   Running             headlamp                  0                   b984c8abb8580       headlamp-94b766c-vmgvf
	dd2aea2c626d3       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                    2 minutes ago        Running             nginx                     0                   a63c33447ea40       nginx
	c342ab2893865       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa       3 minutes ago        Running             gcp-auth                  0                   a74f12e03c1d2       gcp-auth-d4c87556c-9wvtj
	eb0325b8b6c1d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98   3 minutes ago        Running             local-path-provisioner    0                   f0cf5dd7a6f2a       local-path-provisioner-78b46b4d5c-ng2tp
	d97ff771c57e6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                   4 minutes ago        Running             storage-provisioner       0                   bf93f26b2e445       storage-provisioner
	20599ca8aced2       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                   4 minutes ago        Running             coredns                   0                   15890830a62c1       coredns-5dd5756b68-n8p8b
	c7ca2965f2cd5       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                   4 minutes ago        Running             kube-proxy                0                   f3058537003e0       kube-proxy-ffrlw
	448fb49c41941       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                   4 minutes ago        Running             kindnet-cni               0                   f5b505629cca2       kindnet-sx7k4
	6185357e5f17e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                   5 minutes ago        Running             etcd                      0                   a8be594d082de       etcd-addons-864560
	7bc4bf77cdab9       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                   5 minutes ago        Running             kube-controller-manager   0                   19f77f63336eb       kube-controller-manager-addons-864560
	7b747f411cc82       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                   5 minutes ago        Running             kube-scheduler            0                   6f3eeb8cf0dd2       kube-scheduler-addons-864560
	e2c2f8ab600d2       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                                   5 minutes ago        Running             kube-apiserver            0                   4232795885f72       kube-apiserver-addons-864560
	
	* 
	* ==> coredns [20599ca8aced27cf298b9c4d7a450cd947eeed1af23e3a9951154992cc10455f] <==
	* [INFO] 10.244.0.18:43745 - 57955 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000118194s
	[INFO] 10.244.0.18:37432 - 27023 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006366739s
	[INFO] 10.244.0.18:43745 - 56360 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001146494s
	[INFO] 10.244.0.18:43745 - 8285 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000868876s
	[INFO] 10.244.0.18:37432 - 60571 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001991912s
	[INFO] 10.244.0.18:37432 - 26372 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000147724s
	[INFO] 10.244.0.18:43745 - 19139 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000095991s
	[INFO] 10.244.0.18:48089 - 35451 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000110916s
	[INFO] 10.244.0.18:54494 - 62160 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069448s
	[INFO] 10.244.0.18:48089 - 24730 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000078252s
	[INFO] 10.244.0.18:48089 - 2269 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061972s
	[INFO] 10.244.0.18:48089 - 25887 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053571s
	[INFO] 10.244.0.18:48089 - 38244 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045234s
	[INFO] 10.244.0.18:48089 - 1062 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000045571s
	[INFO] 10.244.0.18:54494 - 13566 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000097074s
	[INFO] 10.244.0.18:54494 - 24548 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076742s
	[INFO] 10.244.0.18:48089 - 45537 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001391858s
	[INFO] 10.244.0.18:54494 - 20278 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000079556s
	[INFO] 10.244.0.18:54494 - 24205 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067421s
	[INFO] 10.244.0.18:48089 - 47682 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00103114s
	[INFO] 10.244.0.18:54494 - 19875 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000102087s
	[INFO] 10.244.0.18:48089 - 43982 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066461s
	[INFO] 10.244.0.18:54494 - 46128 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001027767s
	[INFO] 10.244.0.18:54494 - 500 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001087811s
	[INFO] 10.244.0.18:54494 - 17240 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065009s
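The fan-out above — one lookup of hello-world-app tried against ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, and us-east-2.compute.internal (all NXDOMAIN) before the bare service name answers NOERROR — is the pod resolver's search-list expansion under Kubernetes' default ndots:5. A minimal sketch that emulates that candidate ordering; the rule is standard resolv.conf behavior, not coredns code, and run outside the cluster the lookups simply fail.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	name := "hello-world-app.default.svc.cluster.local" // 4 dots, below ndots:5
	search := []string{                                 // search list seen in the log above
		"ingress-nginx.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}

	// With ndots:5, only names with >= 5 dots are tried as absolute first;
	// shorter names get each search suffix appended before the bare name.
	var candidates []string
	if strings.Count(name, ".") >= 5 {
		candidates = append(candidates, name)
	}
	for _, s := range search {
		candidates = append(candidates, name+"."+s)
	}
	if strings.Count(name, ".") < 5 {
		candidates = append(candidates, name)
	}

	for _, c := range candidates {
		addrs, err := net.LookupHost(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err) // analogous to the NXDOMAIN lines
			continue
		}
		fmt.Printf("%s -> %v\n", c, addrs) // analogous to the NOERROR line
		break
	}
}
```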
	
	* 
	* ==> describe nodes <==
	* Name:               addons-864560
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-864560
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=addons-864560
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_33_16_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-864560
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:33:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-864560
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:38:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:36:50 +0000   Wed, 01 Nov 2023 00:33:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:36:50 +0000   Wed, 01 Nov 2023 00:33:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:36:50 +0000   Wed, 01 Nov 2023 00:33:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:36:50 +0000   Wed, 01 Nov 2023 00:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-864560
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 46af9e3cacb64814b7ecde07ebad5751
	  System UUID:                0744d5a2-f4d4-402f-b753-73ea54531daf
	  Boot ID:                    11045d5e-2454-4ceb-8984-3078b90f4cad
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-c74sw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  gcp-auth                    gcp-auth-d4c87556c-9wvtj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  headlamp                    headlamp-94b766c-vmgvf                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 coredns-5dd5756b68-n8p8b                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m51s
	  kube-system                 etcd-addons-864560                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m4s
	  kube-system                 kindnet-sx7k4                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m51s
	  kube-system                 kube-apiserver-addons-864560               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-864560      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-ffrlw                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-addons-864560               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  local-path-storage          local-path-provisioner-78b46b4d5c-ng2tp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m46s  kube-proxy       
	  Normal  Starting                 5m4s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m4s   kubelet          Node addons-864560 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s   kubelet          Node addons-864560 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s   kubelet          Node addons-864560 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m52s  node-controller  Node addons-864560 event: Registered Node addons-864560 in Controller
	  Normal  NodeReady                4m20s  kubelet          Node addons-864560 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001069] FS-Cache: O-key=[8] 'b7623b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000008a5a3042
	[  +0.001086] FS-Cache: N-key=[8] 'b7623b0000000000'
	[  +0.003047] FS-Cache: Duplicate cookie detected
	[  +0.000771] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000986] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000039455db
	[  +0.001156] FS-Cache: O-key=[8] 'b7623b0000000000'
	[  +0.000787] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000002d9a6a9a
	[  +0.001088] FS-Cache: N-key=[8] 'b7623b0000000000'
	[  +2.428786] FS-Cache: Duplicate cookie detected
	[  +0.000743] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000976] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=000000000e72a3ec
	[  +0.001120] FS-Cache: O-key=[8] 'b6623b0000000000'
	[  +0.000729] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=00000000d8941740
	[  +0.001065] FS-Cache: N-key=[8] 'b6623b0000000000'
	[  +0.391461] FS-Cache: Duplicate cookie detected
	[  +0.000777] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001059] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000ef931e6f
	[  +0.001065] FS-Cache: O-key=[8] 'bc623b0000000000'
	[  +0.000725] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=0000000083fa4ef2
	[  +0.001088] FS-Cache: N-key=[8] 'bc623b0000000000'
	
	* 
	* ==> etcd [6185357e5f17e44b9e545253d5877c1a46be0605a7fcea285741e8ca115b3b66] <==
	* {"level":"info","ts":"2023-11-01T00:33:08.947129Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-11-01T00:33:09.417985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-01T00:33:09.418105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-01T00:33:09.418155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-11-01T00:33:09.418209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-11-01T00:33:09.418239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-01T00:33:09.418299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-11-01T00:33:09.418332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-01T00:33:09.42056Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:33:09.421282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:33:09.428634Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-01T00:33:09.421265Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-864560 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:33:09.428873Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:33:09.441687Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:33:09.446027Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:33:09.446066Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T00:33:09.454091Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:33:09.45423Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:33:09.454294Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:33:32.651856Z","caller":"traceutil/trace.go:171","msg":"trace[792011453] linearizableReadLoop","detail":"{readStateIndex:409; appliedIndex:406; }","duration":"139.625298ms","start":"2023-11-01T00:33:32.512216Z","end":"2023-11-01T00:33:32.651842Z","steps":["trace[792011453] 'read index received'  (duration: 92.955877ms)","trace[792011453] 'applied index is now lower than readState.Index'  (duration: 46.668617ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-01T00:33:32.652059Z","caller":"traceutil/trace.go:171","msg":"trace[1241704875] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"142.642925ms","start":"2023-11-01T00:33:32.509408Z","end":"2023-11-01T00:33:32.65205Z","steps":["trace[1241704875] 'process raft request'  (duration: 95.805317ms)","trace[1241704875] 'compare'  (duration: 46.476683ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-01T00:33:32.652197Z","caller":"traceutil/trace.go:171","msg":"trace[908076558] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"140.09823ms","start":"2023-11-01T00:33:32.512094Z","end":"2023-11-01T00:33:32.652192Z","steps":["trace[908076558] 'process raft request'  (duration: 139.677942ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-01T00:33:32.652242Z","caller":"traceutil/trace.go:171","msg":"trace[881333431] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"140.069406ms","start":"2023-11-01T00:33:32.512168Z","end":"2023-11-01T00:33:32.652238Z","steps":["trace[881333431] 'process raft request'  (duration: 139.646435ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-01T00:33:32.652368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.139206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2023-11-01T00:33:32.652401Z","caller":"traceutil/trace.go:171","msg":"trace[2011867844] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:399; }","duration":"140.199694ms","start":"2023-11-01T00:33:32.512195Z","end":"2023-11-01T00:33:32.652394Z","steps":["trace[2011867844] 'agreement among raft nodes before linearized reading'  (duration: 140.12141ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [c342ab289386548a499e97f9a78f88489f679ac62b62dc6d113e260355651127] <==
	* 2023/11/01 00:34:49 GCP Auth Webhook started!
	2023/11/01 00:35:10 Ready to marshal response ...
	2023/11/01 00:35:10 Ready to write response ...
	2023/11/01 00:35:26 Ready to marshal response ...
	2023/11/01 00:35:26 Ready to write response ...
	2023/11/01 00:35:34 Ready to marshal response ...
	2023/11/01 00:35:34 Ready to write response ...
	2023/11/01 00:35:52 Ready to marshal response ...
	2023/11/01 00:35:52 Ready to write response ...
	2023/11/01 00:36:10 Ready to marshal response ...
	2023/11/01 00:36:10 Ready to write response ...
	2023/11/01 00:36:10 Ready to marshal response ...
	2023/11/01 00:36:10 Ready to write response ...
	2023/11/01 00:36:18 Ready to marshal response ...
	2023/11/01 00:36:18 Ready to write response ...
	2023/11/01 00:36:31 Ready to marshal response ...
	2023/11/01 00:36:31 Ready to write response ...
	2023/11/01 00:36:31 Ready to marshal response ...
	2023/11/01 00:36:31 Ready to write response ...
	2023/11/01 00:36:31 Ready to marshal response ...
	2023/11/01 00:36:31 Ready to write response ...
	2023/11/01 00:37:53 Ready to marshal response ...
	2023/11/01 00:37:53 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:38:19 up  8:20,  0 users,  load average: 0.24, 1.19, 1.87
	Linux addons-864560 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [448fb49c419411a3f4c9e0bb049a455e72b1fae24b8bb8e400da39ad5827bd11] <==
	* I1101 00:36:19.065760       1 main.go:227] handling current node
	I1101 00:36:29.078108       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:36:29.078137       1 main.go:227] handling current node
	I1101 00:36:39.082138       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:36:39.082168       1 main.go:227] handling current node
	I1101 00:36:49.092783       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:36:49.092882       1 main.go:227] handling current node
	I1101 00:36:59.104768       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:36:59.104800       1 main.go:227] handling current node
	I1101 00:37:09.109722       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:37:09.109751       1 main.go:227] handling current node
	I1101 00:37:19.122982       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:37:19.123082       1 main.go:227] handling current node
	I1101 00:37:29.127816       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:37:29.127912       1 main.go:227] handling current node
	I1101 00:37:39.139931       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:37:39.139964       1 main.go:227] handling current node
	I1101 00:37:49.149554       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:37:49.149581       1 main.go:227] handling current node
	I1101 00:37:59.162566       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:37:59.162594       1 main.go:227] handling current node
	I1101 00:38:09.167402       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:38:09.167433       1 main.go:227] handling current node
	I1101 00:38:19.179832       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:38:19.179862       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [e2c2f8ab600d2d454a851abe4c47dfdad3d39825c9739ac5b70cca92f1b7b5a2] <==
	* I1101 00:35:28.463894       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1101 00:35:29.487593       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1101 00:35:34.481789       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 00:35:34.879124       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.18.221"}
	I1101 00:35:38.991922       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1101 00:36:09.553501       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 00:36:09.554246       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 00:36:09.567675       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 00:36:09.568333       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 00:36:09.585822       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 00:36:09.586076       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 00:36:09.605661       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 00:36:09.605741       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 00:36:09.663795       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 00:36:09.663857       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 00:36:09.664128       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 00:36:09.664170       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 00:36:09.686819       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 00:36:09.686878       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1101 00:36:10.659317       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1101 00:36:10.664623       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1101 00:36:10.724858       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1101 00:36:15.959132       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 00:36:31.440689       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.72.51"}
	I1101 00:37:54.149803       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.183.192"}
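	The bursts of "Adding GroupVersion snapshot.storage.k8s.io ..." followed by "Terminating all watchers" are consistent with the volumesnapshot CRDs being installed and then torn down as addons are toggled during the run. A diagnostic sketch (not part of the test) to confirm which snapshot CRDs remain registered:
	
	# list any snapshot-related CRDs still present
	kubectl --context addons-864560 get crd | grep snapshot.storage.k8s.io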
	
	* 
	* ==> kube-controller-manager [7bc4bf77cdab9165bdc8e224607316367d81710ad9cc6dd015d4a46c208d7be5] <==
	* W1101 00:37:26.242744       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1101 00:37:26.242774       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1101 00:37:34.887808       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1101 00:37:34.887932       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1101 00:37:40.879792       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1101 00:37:40.879847       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1101 00:37:53.909503       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1101 00:37:53.937767       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-c74sw"
	I1101 00:37:53.965239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="60.695345ms"
	I1101 00:37:53.973035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.740941ms"
	I1101 00:37:53.973229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.405µs"
	I1101 00:37:53.976255       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="118.76µs"
	I1101 00:37:57.261941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.447µs"
	I1101 00:37:58.262157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="66.855µs"
	W1101 00:37:58.413116       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1101 00:37:58.413147       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1101 00:37:59.261448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.244µs"
	I1101 00:38:10.979297       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1101 00:38:10.983858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="6.055µs"
	I1101 00:38:10.992863       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1101 00:38:15.301657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="166.456µs"
	W1101 00:38:17.643337       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1101 00:38:17.643373       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1101 00:38:18.172041       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1101 00:38:18.172088       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
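	The recurring "failed to list *v1.PartialObjectMetadata" errors are the metadata informer retrying against API groups whose CRDs were just deleted (the snapshot teardown above); they are noisy but expected to subside once discovery resyncs. One way to surface stale groups, as a sketch:
	
	# groups that no longer serve their resources are reported on stderr
	kubectl --context addons-864560 api-resources >/dev/null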
	
	* 
	* ==> kube-proxy [c7ca2965f2cd5d052008b96e0793a760a851869649cd073e67bc09e483b8c99f] <==
	* I1101 00:33:28.830228       1 server_others.go:69] "Using iptables proxy"
	I1101 00:33:32.170403       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1101 00:33:33.067922       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 00:33:33.070567       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:33:33.070664       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 00:33:33.070838       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 00:33:33.070943       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:33:33.071273       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:33:33.071492       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:33:33.072273       1 config.go:188] "Starting service config controller"
	I1101 00:33:33.072351       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:33:33.072395       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:33:33.072422       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:33:33.072996       1 config.go:315] "Starting node config controller"
	I1101 00:33:33.073567       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:33:33.273714       1 shared_informer.go:318] Caches are synced for node config
	I1101 00:33:33.298735       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:33:33.321316       1 shared_informer.go:318] Caches are synced for endpoint slice config
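	The route_localnet=1 line is directly relevant to this failure: the Ingress test curls http://127.0.0.1/ from inside the node, which only reaches a NodePort if loopback node-ports are allowed. A quick sanity check from the host (a sketch, using the same profile name as above):
	
	# should print: net.ipv4.conf.all.route_localnet = 1
	minikube -p addons-864560 ssh -- sudo sysctl net.ipv4.conf.all.route_localnet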
	
	* 
	* ==> kube-scheduler [7b747f411cc82ccfc9cbc0ca6550fa332a3e3e402d8b8ea96fb7fa96d7198f1b] <==
	* W1101 00:33:13.010118       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 00:33:13.010746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 00:33:13.010195       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 00:33:13.010841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 00:33:13.010261       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 00:33:13.011016       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 00:33:13.010575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 00:33:13.011140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 00:33:13.013384       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 00:33:13.013466       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:33:13.015619       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 00:33:13.015698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1101 00:33:13.015824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 00:33:13.015893       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 00:33:13.015980       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:33:13.016019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 00:33:13.016247       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 00:33:13.016308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 00:33:13.016411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:33:13.016467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1101 00:33:13.016559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 00:33:13.016597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 00:33:13.016853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 00:33:13.016915       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1101 00:33:14.203111       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
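	The block of "forbidden" warnings at 00:33:13 is the usual startup race: the scheduler's informers begin listing before its RBAC bindings are reconciled, and the final "Caches are synced" line shows it recovered. The bindings can be verified after the fact, e.g.:
	
	# should print "yes" once RBAC has settled
	kubectl --context addons-864560 auth can-i list nodes --as=system:kube-scheduler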
	
	* 
	* ==> kubelet <==
	* Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.739641    1365 manager.go:1106] Failed to create existing container: /docker/6fe6e254754b93bfcbc495fbadb5f3dc9871283f2483647dcf4158eb9db397f5/crio-33030fcbfff6f90cfb1680e0c1ce53db1506a42db01b4d225cba28680aa31cca: Error finding container 33030fcbfff6f90cfb1680e0c1ce53db1506a42db01b4d225cba28680aa31cca: Status 404 returned error can't find the container with id 33030fcbfff6f90cfb1680e0c1ce53db1506a42db01b4d225cba28680aa31cca
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.748187    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2be35c45f3512367289ce789caadedbdb0aa97e007f977e9e71a18bdf5db145e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2be35c45f3512367289ce789caadedbdb0aa97e007f977e9e71a18bdf5db145e/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.753661    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1b33639d8e660e54cdd8c782929222cce6f1944fc7989d9f20bacb598e88b41a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1b33639d8e660e54cdd8c782929222cce6f1944fc7989d9f20bacb598e88b41a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.758942    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0fa96468e0f7004dd70436a1dace157e4485da3e67a9a27934a345ac6dcedbd0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0fa96468e0f7004dd70436a1dace157e4485da3e67a9a27934a345ac6dcedbd0/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.760068    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0fa96468e0f7004dd70436a1dace157e4485da3e67a9a27934a345ac6dcedbd0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0fa96468e0f7004dd70436a1dace157e4485da3e67a9a27934a345ac6dcedbd0/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.767926    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2be35c45f3512367289ce789caadedbdb0aa97e007f977e9e71a18bdf5db145e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2be35c45f3512367289ce789caadedbdb0aa97e007f977e9e71a18bdf5db145e/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.777800    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0cc7d1981c8b2f1f7837a28471e8937773bffa2dd62ddabe2cf08bf72a71ce26/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0cc7d1981c8b2f1f7837a28471e8937773bffa2dd62ddabe2cf08bf72a71ce26/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.778884    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d93c807fcb7404e435b3f68defd3febb328d748238cbb8f320d3074730860d01/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d93c807fcb7404e435b3f68defd3febb328d748238cbb8f320d3074730860d01/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.781065    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d93c807fcb7404e435b3f68defd3febb328d748238cbb8f320d3074730860d01/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d93c807fcb7404e435b3f68defd3febb328d748238cbb8f320d3074730860d01/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.787335    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bb4184a2cf4f6792ff497d3e1d20753c37c432ffc5d9864caa8979f7d24ed5a0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bb4184a2cf4f6792ff497d3e1d20753c37c432ffc5d9864caa8979f7d24ed5a0/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.792590    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/df843c61bcfeea92dbf296925bd8eb2a0d960be25d1aa506d5bc13d0e7eded39/diff" to get inode usage: stat /var/lib/containers/storage/overlay/df843c61bcfeea92dbf296925bd8eb2a0d960be25d1aa506d5bc13d0e7eded39/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.793701    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0cdbdb9e76f871ac7cff2c2335e997957beb49ae8dd33e8a100130b89964c5ea/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0cdbdb9e76f871ac7cff2c2335e997957beb49ae8dd33e8a100130b89964c5ea/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.797989    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e5a1390b8864aa40f788c86e44e84a0b56999025ef673890377cc204d088d06a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e5a1390b8864aa40f788c86e44e84a0b56999025ef673890377cc204d088d06a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.803384    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f9ec2b7a6e04b449ee58b432feddd2eb6d4b5af2440b565179e0c58c46799b70/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f9ec2b7a6e04b449ee58b432feddd2eb6d4b5af2440b565179e0c58c46799b70/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.809349    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f9ec2b7a6e04b449ee58b432feddd2eb6d4b5af2440b565179e0c58c46799b70/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f9ec2b7a6e04b449ee58b432feddd2eb6d4b5af2440b565179e0c58c46799b70/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.814557    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/df843c61bcfeea92dbf296925bd8eb2a0d960be25d1aa506d5bc13d0e7eded39/diff" to get inode usage: stat /var/lib/containers/storage/overlay/df843c61bcfeea92dbf296925bd8eb2a0d960be25d1aa506d5bc13d0e7eded39/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.819874    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ada2fbf2475f1c16a0ffd89d5fec848827cfed86df9926c5e0367ab16f72931b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ada2fbf2475f1c16a0ffd89d5fec848827cfed86df9926c5e0367ab16f72931b/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.827098    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9a05f1dd1df24b348803c46ec7652cffe23b378ecb01b9d09f4ecd220e72c9be/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9a05f1dd1df24b348803c46ec7652cffe23b378ecb01b9d09f4ecd220e72c9be/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.833321    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0cc7d1981c8b2f1f7837a28471e8937773bffa2dd62ddabe2cf08bf72a71ce26/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0cc7d1981c8b2f1f7837a28471e8937773bffa2dd62ddabe2cf08bf72a71ce26/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.835484    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5adeee816f7b4e643602b389d6d1e77ecc38d16211211357cd63a44e575f19c0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5adeee816f7b4e643602b389d6d1e77ecc38d16211211357cd63a44e575f19c0/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.835488    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9a05f1dd1df24b348803c46ec7652cffe23b378ecb01b9d09f4ecd220e72c9be/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9a05f1dd1df24b348803c46ec7652cffe23b378ecb01b9d09f4ecd220e72c9be/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.836598    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fa89d5021fde73134e423269e043474dfa22bcef3e9ae110e5249acc0c6f1bd0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fa89d5021fde73134e423269e043474dfa22bcef3e9ae110e5249acc0c6f1bd0/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: E1101 00:38:15.840948    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e5a1390b8864aa40f788c86e44e84a0b56999025ef673890377cc204d088d06a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e5a1390b8864aa40f788c86e44e84a0b56999025ef673890377cc204d088d06a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 01 00:38:15 addons-864560 kubelet[1365]: I1101 00:38:15.949644    1365 scope.go:117] "RemoveContainer" containerID="0e00c88ccfe4146f98283b028630f38f72d59feaf7c20b1a7c3157f57843ca8e"
	Nov 01 00:38:15 addons-864560 kubelet[1365]: I1101 00:38:15.990338    1365 scope.go:117] "RemoveContainer" containerID="513e871b54fa31ab53cdc3c398f3eb38473263bbb89a41c383d67a85d6abb154"
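	The kubelet's fsHandler errors are cAdvisor trying to stat overlay layers of containers that were already garbage-collected (note the "RemoveContainer" lines immediately after), so they indicate stats collection racing cleanup rather than data loss. To confirm the containers are really gone, a diagnostic sketch:
	
	# exited containers that have not yet been removed
	minikube -p addons-864560 ssh -- sudo crictl ps -a --state exited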
	
	* 
	* ==> storage-provisioner [d97ff771c57e65a2431c3a19eb071573167a1534e6a3a30e56c3d672cf6b5009] <==
	* I1101 00:34:00.602097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 00:34:00.617600       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 00:34:00.617676       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 00:34:00.634832       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 00:34:00.635012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-864560_d112803b-724c-45d5-9b2a-6597e37d226a!
	I1101 00:34:00.635937       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25ecc663-6a98-43d9-b461-5eb0b081acde", APIVersion:"v1", ResourceVersion:"849", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-864560_d112803b-724c-45d5-9b2a-6597e37d226a became leader
	I1101 00:34:00.735922       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-864560_d112803b-724c-45d5-9b2a-6597e37d226a!
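	As the Event above shows, the provisioner acquires its leader lease via an Endpoints object. The current holder can be read back directly, e.g.:
	
	# the leader identity is recorded in an annotation on this Endpoints object
	kubectl --context addons-864560 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml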
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-864560 -n addons-864560
helpers_test.go:261: (dbg) Run:  kubectl --context addons-864560 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (167.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (189.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6d311654-9e06-4162-b404-d19368993566] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.037952332s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-258660 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-258660 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-258660 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-258660 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1e7cc846-fdbf-43fe-ae18-c257e016fc6b] Pending
helpers_test.go:344: "sp-pod" [1e7cc846-fdbf-43fe-ae18-c257e016fc6b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258660 -n functional-258660
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-11-01 00:45:08.29831187 +0000 UTC m=+784.844347632
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-258660 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-258660 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-258660/192.168.49.2
Start Time:       Wed, 01 Nov 2023 00:42:07 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xhp7m (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-xhp7m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/sp-pod to functional-258660
  Warning  Failed     109s (x2 over 2m30s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     52s (x3 over 2m30s)   kubelet            Error: ErrImagePull
  Warning  Failed     52s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:803c351998abcb39e6a20d90b8369f66605e2e87bb7f8e9a4f500738836404e7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    17s (x5 over 2m30s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     17s (x5 over 2m30s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    6s (x4 over 3m)       kubelet            Pulling image "docker.io/nginx"
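The events make the root cause unambiguous: the docker.io anonymous pull rate limit, not the PVC machinery under test. A minimal sketch of one common mitigation, assuming Docker Hub credentials are available to the CI job (the secret name and credential placeholders are illustrative):

# create registry credentials and let the default service account use them
kubectl --context functional-258660 create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<access-token>
kubectl --context functional-258660 patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'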
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-258660 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-258660 logs sp-pod -n default: exit status 1 (107.089511ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-258660 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-258660
helpers_test.go:235: (dbg) docker inspect functional-258660:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1",
	        "Created": "2023-11-01T00:39:46.592250202Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1218872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-01T00:39:46.924779747Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bd2c3f7c992aecdf624ceae92825f3a10bf56bd552768efdb49aafbacd808193",
	        "ResolvConfPath": "/var/lib/docker/containers/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/hosts",
	        "LogPath": "/var/lib/docker/containers/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1-json.log",
	        "Name": "/functional-258660",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-258660:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-258660",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3a900ccef3d4746e3c8c4a9d432ad09237d5a126e3c331763caeae796ca2fba1-init/diff:/var/lib/docker/overlay2/d052914c945f7ab680be56190d2f2374e48b87c8da40d55e2692538d0bc19343/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a900ccef3d4746e3c8c4a9d432ad09237d5a126e3c331763caeae796ca2fba1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a900ccef3d4746e3c8c4a9d432ad09237d5a126e3c331763caeae796ca2fba1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a900ccef3d4746e3c8c4a9d432ad09237d5a126e3c331763caeae796ca2fba1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-258660",
	                "Source": "/var/lib/docker/volumes/functional-258660/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-258660",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-258660",
	                "name.minikube.sigs.k8s.io": "functional-258660",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "62de7f4e20485269c5f7934f7ffe91a33ecae5425a5aaf584c5ece483e98a3b3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34302"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34301"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34298"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34300"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34299"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/62de7f4e2048",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-258660": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "76e9c18a9847",
	                        "functional-258660"
	                    ],
	                    "NetworkID": "fdf3b911851b6334946afee3eaae0c3b24e71e887cd07da8b6d9731a04449b4b",
	                    "EndpointID": "afbfde1cafc8893962afb5d9bbf6bf3c2ef8f5bbb180a356e5d8e37be53da4b8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
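When only a slice of this inspect output is needed (the published port mappings, say), a Go-template query is lighter than dumping the whole document; a sketch against the same container:

# print just the host-port mappings shown above
docker inspect functional-258660 --format '{{json .NetworkSettings.Ports}}'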
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-258660 -n functional-258660
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 logs -n 25: (1.945152492s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
	| image          | functional-258660 image load --daemon                                  | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-258660               |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| image          | functional-258660 image save                                           | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-258660               |                   |         |                |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| image          | functional-258660 image rm                                             | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-258660               |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| image          | functional-258660 image load                                           | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| image          | functional-258660 image save --daemon                                  | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-258660               |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/1202897.pem                                             |                   |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /usr/share/ca-certificates/1202897.pem                                 |                   |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                   |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/12028972.pem                                            |                   |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /usr/share/ca-certificates/12028972.pem                                |                   |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                   |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/test/nested/copy/1202897/hosts                                    |                   |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format short                                                |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format yaml                                                 |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| ssh            | functional-258660 ssh pgrep                                            | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC |                     |
	|                | buildkitd                                                              |                   |         |                |                     |                     |
	| image          | functional-258660 image build -t                                       | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | localhost/my-image:functional-258660                                   |                   |         |                |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                   |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	| image          | functional-258660                                                      | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format json                                                 |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format table                                                |                   |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                   |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                   |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                   |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |                |                     |                     |
	|----------------|------------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
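
Each row in the table above is an audit entry for one minikube invocation against the functional-258660 profile, and can be replayed verbatim from the same workspace. For example, the table-format image listing near the end of the table corresponds to:

	out/minikube-linux-arm64 -p functional-258660 image ls --format table --alsologtostderr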
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:43:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:43:29.971305 1228731 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:43:29.971489 1228731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:43:29.971502 1228731 out.go:309] Setting ErrFile to fd 2...
	I1101 00:43:29.971510 1228731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:43:29.971800 1228731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 00:43:29.972311 1228731 out.go:303] Setting JSON to false
	I1101 00:43:29.973368 1228731 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30357,"bootTime":1698769053,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 00:43:29.973442 1228731 start.go:138] virtualization:  
	I1101 00:43:29.976078 1228731 out.go:177] * [functional-258660] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 00:43:29.978339 1228731 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:43:29.980129 1228731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:43:29.978450 1228731 notify.go:220] Checking for updates...
	I1101 00:43:29.984036 1228731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:43:29.985906 1228731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 00:43:29.987431 1228731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 00:43:29.989073 1228731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:43:29.991327 1228731 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:43:29.992045 1228731 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:43:30.022554 1228731 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 00:43:30.022695 1228731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:43:30.119165 1228731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-01 00:43:30.10739331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:43:30.119282 1228731 docker.go:295] overlay module found
	I1101 00:43:30.121135 1228731 out.go:177] * Using the docker driver based on existing profile
	I1101 00:43:30.123251 1228731 start.go:298] selected driver: docker
	I1101 00:43:30.123267 1228731 start.go:902] validating driver "docker" against &{Name:functional-258660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-258660 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:43:30.123396 1228731 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:43:30.123493 1228731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:43:30.195926 1228731 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-01 00:43:30.186063737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:43:30.196367 1228731 cni.go:84] Creating CNI manager for ""
	I1101 00:43:30.196384 1228731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:43:30.196398 1228731 start_flags.go:323] config:
	{Name:functional-258660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-258660 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:43:30.198541 1228731 out.go:177] * dry-run validation complete!
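
The trace above ends at dry-run validation because the profile already existed and only had to be revalidated against the saved config. Sections like this one and those that follow are what `minikube logs` collects; to regenerate them from the same workspace, something like the following should work (`--file` is optional):

	out/minikube-linux-arm64 -p functional-258660 logs --file=logs.txt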
	
	* 
	* ==> CRI-O <==
	* Nov 01 00:43:38 functional-258660 crio[4504]: time="2023-11-01 00:43:38.251480248Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 01 00:43:38 functional-258660 crio[4504]: time="2023-11-01 00:43:38.275430449Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a47e10d77c3494f907eaa0523e1ce8e9ed565cecb8f176bd05c12e4985907605/merged/etc/group: no such file or directory"
	Nov 01 00:43:38 functional-258660 crio[4504]: time="2023-11-01 00:43:38.357670343Z" level=info msg="Created container 5bb4de34e4cf4b8f10247e91c1e46265560431e1b598c257020c41c056859b78: kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-vh6ln/dashboard-metrics-scraper" id=2c360cc2-6d64-4337-8208-265b9e7adfdc name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 00:43:38 functional-258660 crio[4504]: time="2023-11-01 00:43:38.360545218Z" level=info msg="Starting container: 5bb4de34e4cf4b8f10247e91c1e46265560431e1b598c257020c41c056859b78" id=21f7bd6d-d6bb-43e2-9c87-7deaa38b56c2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 00:43:38 functional-258660 crio[4504]: time="2023-11-01 00:43:38.375571866Z" level=info msg="Started container" PID=6816 containerID=5bb4de34e4cf4b8f10247e91c1e46265560431e1b598c257020c41c056859b78 description=kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-vh6ln/dashboard-metrics-scraper id=21f7bd6d-d6bb-43e2-9c87-7deaa38b56c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=476b4f4a1ce922d8654cbe7ab17ef232c0301d793e02e6282eda42965385dc16
	Nov 01 00:43:43 functional-258660 crio[4504]: time="2023-11-01 00:43:43.084770627Z" level=info msg="Checking image status: gcr.io/google-containers/addon-resizer:functional-258660" id=b2530a8a-239f-460a-bda3-353e8b93cce4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:43:43 functional-258660 crio[4504]: time="2023-11-01 00:43:43.085035511Z" level=info msg="Image gcr.io/google-containers/addon-resizer:functional-258660 not found" id=b2530a8a-239f-460a-bda3-353e8b93cce4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:43:45 functional-258660 crio[4504]: time="2023-11-01 00:43:45.350658189Z" level=info msg="Checking image status: docker.io/nginx:latest" id=8a5eeb3a-faef-4b42-95d5-9c922593c0b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:43:45 functional-258660 crio[4504]: time="2023-11-01 00:43:45.350888735Z" level=info msg="Image docker.io/nginx:latest not found" id=8a5eeb3a-faef-4b42-95d5-9c922593c0b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:43:45 functional-258660 crio[4504]: time="2023-11-01 00:43:45.352055939Z" level=info msg="Pulling image: docker.io/nginx:latest" id=22cf7917-2ad9-4938-8611-3c48ebea1293 name=/runtime.v1.ImageService/PullImage
	Nov 01 00:43:45 functional-258660 crio[4504]: time="2023-11-01 00:43:45.353223545Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 01 00:43:51 functional-258660 crio[4504]: time="2023-11-01 00:43:51.569857710Z" level=info msg="Checking image status: gcr.io/google-containers/addon-resizer:functional-258660" id=5387e228-4869-414d-8d1e-ea7f9a42802e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:43:51 functional-258660 crio[4504]: time="2023-11-01 00:43:51.570076589Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b08046378d77c9dfdab5fbe738244949bc9d487d7b394813b7209ff1f43b82cd,RepoTags:[gcr.io/google-containers/addon-resizer:functional-258660],RepoDigests:[gcr.io/google-containers/addon-resizer@sha256:2a8d4b63cfef57ff8da6bfa7a54875094128c3477d8ebde545a5f4e2465e35b3],Size_:40216491,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5387e228-4869-414d-8d1e-ea7f9a42802e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:43:54 functional-258660 crio[4504]: time="2023-11-01 00:43:54.043792802Z" level=info msg="Checking image status: gcr.io/google-containers/addon-resizer:functional-258660" id=0e2b6595-fb9e-4d9b-965f-1806e6cb09f8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:43:54 functional-258660 crio[4504]: time="2023-11-01 00:43:54.044017473Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91,RepoTags:[gcr.io/google-containers/addon-resizer:functional-258660],RepoDigests:[gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126],Size_:34114467,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0e2b6595-fb9e-4d9b-965f-1806e6cb09f8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:44:27 functional-258660 crio[4504]: time="2023-11-01 00:44:27.350422594Z" level=info msg="Checking image status: docker.io/nginx:latest" id=6752c018-cfc5-40a0-894d-3ea22163b9cf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:44:27 functional-258660 crio[4504]: time="2023-11-01 00:44:27.350653287Z" level=info msg="Image docker.io/nginx:latest not found" id=6752c018-cfc5-40a0-894d-3ea22163b9cf name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:44:38 functional-258660 crio[4504]: time="2023-11-01 00:44:38.350012597Z" level=info msg="Checking image status: docker.io/nginx:latest" id=95d6fe79-f6dd-4070-9daf-a898bd5a8ed3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:44:38 functional-258660 crio[4504]: time="2023-11-01 00:44:38.350237793Z" level=info msg="Image docker.io/nginx:latest not found" id=95d6fe79-f6dd-4070-9daf-a898bd5a8ed3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:44:51 functional-258660 crio[4504]: time="2023-11-01 00:44:51.353187963Z" level=info msg="Checking image status: docker.io/nginx:latest" id=7c3b0ec4-5a84-48ca-b750-348dc422db46 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:44:51 functional-258660 crio[4504]: time="2023-11-01 00:44:51.353418107Z" level=info msg="Image docker.io/nginx:latest not found" id=7c3b0ec4-5a84-48ca-b750-348dc422db46 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:45:02 functional-258660 crio[4504]: time="2023-11-01 00:45:02.351214516Z" level=info msg="Checking image status: docker.io/nginx:latest" id=39cc3d51-9a11-4826-8733-9f6c60113b4d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:45:02 functional-258660 crio[4504]: time="2023-11-01 00:45:02.351453686Z" level=info msg="Image docker.io/nginx:latest not found" id=39cc3d51-9a11-4826-8733-9f6c60113b4d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 00:45:02 functional-258660 crio[4504]: time="2023-11-01 00:45:02.351943758Z" level=info msg="Pulling image: docker.io/nginx:latest" id=c2580a86-fa79-473e-ac36-61393ca7bbfc name=/runtime.v1.ImageService/PullImage
	Nov 01 00:45:02 functional-258660 crio[4504]: time="2023-11-01 00:45:02.354093191Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
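
The repeated "Image docker.io/nginx:latest not found" entries followed by "Pulling image" show the kubelet polling image status over the CRI while the pull is still in flight, which is likely what keeps sp-pod out of the container table below. The pull can be checked or retried by hand inside the node; a sketch, assuming the docker-driver profile is still running:

	out/minikube-linux-arm64 -p functional-258660 ssh "sudo crictl pull docker.io/library/nginx:latest"
	out/minikube-linux-arm64 -p functional-258660 ssh "sudo crictl images | grep nginx"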
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	5bb4de34e4cf4       docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   About a minute ago   Running             dashboard-metrics-scraper   0                   476b4f4a1ce92       dashboard-metrics-scraper-7fd5cb4ddc-vh6ln
	c1a4d1df20141       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   e21c106be2e43       kubernetes-dashboard-8694d4445c-567mr
	30dde43e6bfe9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              About a minute ago   Exited              mount-munger                0                   203487028a9e7       busybox-mount
	88da2e55d5563       72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb                                                 2 minutes ago        Running             echoserver-arm              0                   5ff231c3f54d3       hello-node-759d89bdcc-s5xsm
	99c1ff75d50c1       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5           2 minutes ago        Running             echoserver-arm              0                   9fdd231694529       hello-node-connect-7799dfb7c6-pzbht
	29e04a270b516       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                  3 minutes ago        Running             nginx                       0                   8d900d7e2bc12       nginx-svc
	0091d2bd9aa8b       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                 3 minutes ago        Running             kube-proxy                  3                   1ac4845738736       kube-proxy-gdrjs
	b1a9f4542d59c       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                 3 minutes ago        Running             kindnet-cni                 3                   172532e73b254       kindnet-gplw7
	2bca3f804b504       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 3 minutes ago        Running             storage-provisioner         4                   f17ab81187244       storage-provisioner
	86dcca3459ccb       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                 3 minutes ago        Running             coredns                     3                   083f405f0a330       coredns-5dd5756b68-7vkm8
	307d2b6ef007c       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                                 3 minutes ago        Running             kube-apiserver              0                   71e4f374dd9cb       kube-apiserver-functional-258660
	53d900ce0a3ef       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                 3 minutes ago        Running             etcd                        3                   bf8a4324c2ddd       etcd-functional-258660
	826c05cd07f8d       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                 3 minutes ago        Running             kube-scheduler              3                   a2b4dfaf7cf3f       kube-scheduler-functional-258660
	fe4b3a8e4e007       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                 3 minutes ago        Running             kube-controller-manager     3                   fa79db6248f04       kube-controller-manager-functional-258660
	a5fdfc7a9e8ab       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 3 minutes ago        Exited              storage-provisioner         3                   f17ab81187244       storage-provisioner
	29adf0b616acb       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                 4 minutes ago        Exited              coredns                     2                   083f405f0a330       coredns-5dd5756b68-7vkm8
	3ec7ba86cafff       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                 4 minutes ago        Exited              kindnet-cni                 2                   172532e73b254       kindnet-gplw7
	9bf7c8b13babc       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                 4 minutes ago        Exited              kube-scheduler              2                   a2b4dfaf7cf3f       kube-scheduler-functional-258660
	718aa2535b75e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                 4 minutes ago        Exited              etcd                        2                   bf8a4324c2ddd       etcd-functional-258660
	bd8e7fb775502       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                 4 minutes ago        Exited              kube-controller-manager     2                   fa79db6248f04       kube-controller-manager-functional-258660
	a3ee421b4af8a       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                 4 minutes ago        Exited              kube-proxy                  2                   1ac4845738736       kube-proxy-gdrjs
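
This table is the CRI view of the node, equivalent to running `sudo crictl ps -a` inside it; minikube's log collector reads the same endpoint. The Exited rows at ATTEMPT 2 paired with Running rows at ATTEMPT 3 (and 4 for storage-provisioner) reflect the restarts the functional tests exercise. To inspect it live:

	out/minikube-linux-arm64 -p functional-258660 ssh "sudo crictl ps -a"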
	
	* 
	* ==> coredns [29adf0b616acb23a8c8993e2d1af55ea0790f6f66a1c212aa7f7c5d6da1dcc50] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41037 - 59911 "HINFO IN 5119387479118527378.1927566891556418920. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024766179s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [86dcca3459ccbc9d930febf6d33d1634f179cc5ad576760992af1653cc5c26a2] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40739 - 23961 "HINFO IN 6899794629071403592.1181857729062252931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025500376s
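
Both CoreDNS containers report a healthy start: the random-name HINFO query against 127.0.0.1 is the loop plugin's self-probe, and NXDOMAIN is the expected answer when no forwarding loop exists; the SIGTERM/lameduck lines in the first log are just the old replica shutting down. In-cluster resolution can be spot-checked with a throwaway pod; a sketch, assuming the jessie-dnsutils test image is pullable:

	kubectl --context functional-258660 run dnsutils --rm -it --restart=Never --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 -- nslookup kubernetes.default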
	
	* 
	* ==> describe nodes <==
	* Name:               functional-258660
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-258660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=functional-258660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_40_09_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:40:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-258660
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:44:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:44:40 +0000   Wed, 01 Nov 2023 00:40:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:44:40 +0000   Wed, 01 Nov 2023 00:40:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:44:40 +0000   Wed, 01 Nov 2023 00:40:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:44:40 +0000   Wed, 01 Nov 2023 00:40:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-258660
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 6278c2e49d7e4f56bc021a26d3f534c1
	  System UUID:                29def205-1752-46a4-8780-42567d43f015
	  Boot ID:                    11045d5e-2454-4ceb-8984-3078b90f4cad
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-s5xsm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     hello-node-connect-7799dfb7c6-pzbht           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 coredns-5dd5756b68-7vkm8                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m49s
	  kube-system                 etcd-functional-258660                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m2s
	  kube-system                 kindnet-gplw7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m50s
	  kube-system                 kube-apiserver-functional-258660              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-functional-258660     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-gdrjs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-functional-258660              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-vh6ln    0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-567mr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  Starting                 3m32s                  kube-proxy       
	  Normal  Starting                 4m14s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node functional-258660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node functional-258660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m10s (x8 over 5m10s)  kubelet          Node functional-258660 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m2s                   kubelet          Node functional-258660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s                   kubelet          Node functional-258660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s                   kubelet          Node functional-258660 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m2s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m51s                  node-controller  Node functional-258660 event: Registered Node functional-258660 in Controller
	  Normal  NodeReady                4m48s                  kubelet          Node functional-258660 status is now: NodeReady
	  Normal  RegisteredNode           4m4s                   node-controller  Node functional-258660 event: Registered Node functional-258660 in Controller
	  Normal  Starting                 3m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m39s (x8 over 3m39s)  kubelet          Node functional-258660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s (x8 over 3m39s)  kubelet          Node functional-258660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x8 over 3m39s)  kubelet          Node functional-258660 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m22s                  node-controller  Node functional-258660 event: Registered Node functional-258660 in Controller
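
This section mirrors `kubectl describe node` for the single control-plane node. The three RegisteredNode events (4m51s, 4m4s, 3m22s) and the repeated kubelet condition events line up with the control-plane restarts visible in the container table above. To reproduce:

	kubectl --context functional-258660 describe node functional-258660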
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] '70643b0000000000'
	[  +0.000767] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001031] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000004e2890c8
	[  +0.001063] FS-Cache: N-key=[8] '70643b0000000000'
	[  +0.004430] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000527cc4c3
	[  +0.001080] FS-Cache: O-key=[8] '70643b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000008a5a3042
	[  +0.001070] FS-Cache: N-key=[8] '70643b0000000000'
	[  +2.029136] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001008] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000d9fe484b
	[  +0.001140] FS-Cache: O-key=[8] '6f643b0000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000004e2890c8
	[  +0.001074] FS-Cache: N-key=[8] '6f643b0000000000'
	[  +0.310063] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=000000005bafb08b
	[  +0.001102] FS-Cache: O-key=[8] '75643b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=00000000763bdf7d
	[  +0.001071] FS-Cache: N-key=[8] '75643b0000000000'
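
The FS-Cache "Duplicate cookie detected" messages come from the 9p filesystem (note the {9p.inode} cookies) that minikube uses for host directory mounts; they are kernel-side noise and, by themselves, do not indicate a test failure. To isolate them on the host:

	sudo dmesg | grep FS-Cache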
	
	* 
	* ==> etcd [53d900ce0a3ef284ae9adc695895bef21f72e5df173e8c0b08e2af182c13ad7d] <==
	* {"level":"info","ts":"2023-11-01T00:41:32.437376Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-01T00:41:32.437452Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-01T00:41:32.437701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-11-01T00:41:32.438314Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-11-01T00:41:32.438609Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:41:32.438686Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:41:32.451047Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-01T00:41:32.451144Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-01T00:41:32.469148Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:41:32.469299Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:41:32.469337Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T00:41:33.489023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2023-11-01T00:41:33.489072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2023-11-01T00:41:33.4891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-11-01T00:41:33.489113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2023-11-01T00:41:33.489128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-11-01T00:41:33.489139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2023-11-01T00:41:33.489154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-11-01T00:41:33.501213Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-258660 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:41:33.501323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:41:33.502465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-01T00:41:33.503111Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:41:33.509923Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:41:33.525018Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:41:33.525057Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [718aa2535b75e6b220b24b54648a2d67b89faa379789988a2572b1aabcdfcbc5] <==
	* {"level":"info","ts":"2023-11-01T00:40:51.124224Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T00:40:52.971013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-01T00:40:52.971102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-01T00:40:52.971157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-01T00:40:52.971196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2023-11-01T00:40:52.971228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-11-01T00:40:52.971269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2023-11-01T00:40:52.971306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-11-01T00:40:52.977203Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-258660 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T00:40:52.97739Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:40:52.978381Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-01T00:40:52.978488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T00:40:52.97934Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T00:40:52.990904Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T00:40:52.990948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-01T00:41:20.904026Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-01T00:41:20.904075Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-258660","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-11-01T00:41:20.904153Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:41:20.904223Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:41:20.951346Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-01T00:41:20.951395Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-01T00:41:20.951446Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-11-01T00:41:20.953936Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-01T00:41:20.954029Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-01T00:41:20.954042Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-258660","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  00:45:10 up  8:27,  0 users,  load average: 0.47, 1.07, 1.60
	Linux functional-258660 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
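
This section corresponds to `uptime`, `uname -a`, and the PRETTY_NAME line of /etc/os-release from inside the node, e.g.:

	out/minikube-linux-arm64 -p functional-258660 ssh "uptime && uname -a && grep PRETTY_NAME /etc/os-release"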
	
	* 
	* ==> kindnet [3ec7ba86cafffa606af084d955a06536f744dca10b45296984c3101aeda1b0bc] <==
	* I1101 00:40:50.997684       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1101 00:40:50.997765       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1101 00:40:50.997922       1 main.go:116] setting mtu 1500 for CNI 
	I1101 00:40:50.997934       1 main.go:146] kindnetd IP family: "ipv4"
	I1101 00:40:50.997946       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1101 00:40:55.546743       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:40:55.546777       1 main.go:227] handling current node
	I1101 00:41:05.558442       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:41:05.558472       1 main.go:227] handling current node
	I1101 00:41:15.568849       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:41:15.568881       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [b1a9f4542d59caf0871978e2274ab4ffefc2f2b4fcc78fd156a791700e7a1a0a] <==
	* I1101 00:43:07.405627       1 main.go:227] handling current node
	I1101 00:43:17.418114       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:43:17.418143       1 main.go:227] handling current node
	I1101 00:43:27.422661       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:43:27.422781       1 main.go:227] handling current node
	I1101 00:43:37.428467       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:43:37.428563       1 main.go:227] handling current node
	I1101 00:43:47.436888       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:43:47.436916       1 main.go:227] handling current node
	I1101 00:43:57.449116       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:43:57.449147       1 main.go:227] handling current node
	I1101 00:44:07.460057       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:44:07.460084       1 main.go:227] handling current node
	I1101 00:44:17.463776       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:44:17.463807       1 main.go:227] handling current node
	I1101 00:44:27.473780       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:44:27.473812       1 main.go:227] handling current node
	I1101 00:44:37.477832       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:44:37.477865       1 main.go:227] handling current node
	I1101 00:44:47.481925       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:44:47.481952       1 main.go:227] handling current node
	I1101 00:44:57.494102       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:44:57.494132       1 main.go:227] handling current node
	I1101 00:45:07.500591       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:45:07.500626       1 main.go:227] handling current node
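
Both kindnet logs show the same steady state: a reconcile pass roughly every ten seconds that finds only the single node (192.168.49.2) and handles it as the current node. Assuming the app=kindnet label minikube applies to the daemonset, the live log is:

	kubectl --context functional-258660 -n kube-system logs -l app=kindnet --tail=20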
	
	* 
	* ==> kube-apiserver [307d2b6ef007c7bfd4dd28f2734096e652da456e4be4d0a1ec071182a1555f01] <==
	* I1101 00:41:36.289663       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 00:41:36.289748       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 00:41:36.305636       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 00:41:36.324051       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 00:41:36.324180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:41:36.324866       1 aggregator.go:166] initial CRD sync complete...
	I1101 00:41:36.324914       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 00:41:36.324943       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 00:41:36.324978       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:41:36.995925       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 00:41:38.593280       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 00:41:38.722502       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 00:41:38.732849       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 00:41:38.798178       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 00:41:38.806875       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 00:41:54.552124       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 00:41:55.864681       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.163.77"}
	I1101 00:41:55.883195       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 00:42:03.008895       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.89.163"}
	I1101 00:42:13.478263       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 00:42:13.633633       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.207.138"}
	I1101 00:42:50.282559       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.227.218"}
	I1101 00:43:31.563126       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 00:43:31.885281       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.32.211"}
	I1101 00:43:31.906889       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.240.118"}
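
The apiserver log is dominated by quota-admission registrations and ClusterIP allocations for the services the tests create (invalid-svc, nginx-svc, hello-node-connect, hello-node, and the two dashboard services). The allocations can be cross-checked against the live service list:

	kubectl --context functional-258660 get svc -A -o wide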
	
	* 
	* ==> kube-controller-manager [bd8e7fb77550263f1d5afaf8009191b47f9d53d4b8a9b83bbe33ace39b3b6aee] <==
	* I1101 00:41:06.792407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.507232ms"
	I1101 00:41:06.792572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.004µs"
	I1101 00:41:06.997387       1 shared_informer.go:318] Caches are synced for HPA
	I1101 00:41:07.066960       1 shared_informer.go:318] Caches are synced for stateful set
	I1101 00:41:07.068150       1 shared_informer.go:318] Caches are synced for PVC protection
	I1101 00:41:07.097056       1 shared_informer.go:318] Caches are synced for attach detach
	I1101 00:41:07.116132       1 shared_informer.go:318] Caches are synced for persistent volume
	I1101 00:41:07.123824       1 shared_informer.go:318] Caches are synced for expand
	I1101 00:41:07.130316       1 shared_informer.go:318] Caches are synced for ephemeral
	I1101 00:41:07.221734       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1101 00:41:07.225521       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:41:07.225550       1 shared_informer.go:318] Caches are synced for job
	I1101 00:41:07.241886       1 shared_informer.go:318] Caches are synced for cronjob
	I1101 00:41:07.260837       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:41:07.261934       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 00:41:07.273057       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1101 00:41:07.283119       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 00:41:07.283150       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 00:41:07.347536       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1101 00:41:07.348717       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1101 00:41:07.349789       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1101 00:41:07.351014       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1101 00:41:07.623813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="113.648µs"
	I1101 00:41:07.647385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.630389ms"
	I1101 00:41:07.647706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.894µs"
	
	* 
	* ==> kube-controller-manager [fe4b3a8e4e007032dfc1f9ef5dc82efa471489a49ced58c617f257ff266e52a6] <==
	* E1101 00:43:31.715151       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1101 00:43:31.725600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="27.036915ms"
	E1101 00:43:31.725747       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1101 00:43:31.725722       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1101 00:43:31.729228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="13.986868ms"
	E1101 00:43:31.729307       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1101 00:43:31.729580       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1101 00:43:31.733368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="7.552405ms"
	E1101 00:43:31.733488       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1101 00:43:31.733458       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1101 00:43:31.744664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.295872ms"
	E1101 00:43:31.745759       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1101 00:43:31.745945       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1101 00:43:31.767359       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-567mr"
	I1101 00:43:31.785225       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="39.322077ms"
	I1101 00:43:31.787424       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-vh6ln"
	I1101 00:43:31.814480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="47.750947ms"
	I1101 00:43:31.834212       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="19.592743ms"
	I1101 00:43:31.834310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="64.713µs"
	I1101 00:43:31.837644       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.301019ms"
	I1101 00:43:31.837808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="29.94µs"
	I1101 00:43:36.788397       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.244186ms"
	I1101 00:43:36.788702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.021µs"
	I1101 00:43:38.793778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="11.824604ms"
	I1101 00:43:38.793922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="41.395µs"
	
	* 
	* ==> kube-proxy [0091d2bd9aa8ba39f51ce0e6abec12cdb175ce7567db4ad1a3ec4380d7b98b9e] <==
	* I1101 00:41:37.056693       1 server_others.go:69] "Using iptables proxy"
	I1101 00:41:37.081467       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1101 00:41:37.168031       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 00:41:37.171478       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:41:37.171526       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 00:41:37.171536       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 00:41:37.171654       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:41:37.171884       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:41:37.171902       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:41:37.174549       1 config.go:188] "Starting service config controller"
	I1101 00:41:37.174571       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:41:37.174593       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:41:37.174598       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:41:37.179282       1 config.go:315] "Starting node config controller"
	I1101 00:41:37.179304       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:41:37.275393       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 00:41:37.275478       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:41:37.280080       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [a3ee421b4af8acf7dd2c409dd797f9f3f38b8b30f95d4705b827bb1a17dc58ea] <==
	* I1101 00:40:54.087232       1 server_others.go:69] "Using iptables proxy"
	I1101 00:40:55.552803       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1101 00:40:55.608375       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 00:40:55.610843       1 server_others.go:152] "Using iptables Proxier"
	I1101 00:40:55.610933       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 00:40:55.610966       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 00:40:55.611079       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 00:40:55.611331       1 server.go:846] "Version info" version="v1.28.3"
	I1101 00:40:55.626059       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:40:55.627408       1 config.go:188] "Starting service config controller"
	I1101 00:40:55.627519       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 00:40:55.627573       1 config.go:97] "Starting endpoint slice config controller"
	I1101 00:40:55.627608       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 00:40:55.628606       1 config.go:315] "Starting node config controller"
	I1101 00:40:55.628660       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 00:40:55.727918       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 00:40:55.728036       1 shared_informer.go:318] Caches are synced for service config
	I1101 00:40:55.729486       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [826c05cd07f8d0d32b7f797facb98a8f93853e2eab5a01e21eb6f142fa339cf0] <==
	* I1101 00:41:34.994460       1 serving.go:348] Generated self-signed cert in-memory
	I1101 00:41:36.910601       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1101 00:41:36.910722       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:41:36.920505       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 00:41:36.920612       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 00:41:36.920640       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 00:41:36.920666       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 00:41:36.927617       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 00:41:36.927646       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:41:36.927667       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 00:41:36.927674       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 00:41:37.022186       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1101 00:41:37.028555       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 00:41:37.028688       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [9bf7c8b13babc00eabd2d1f4bedebaa59ed894d19b8337b09bc499c03050bacd] <==
	* I1101 00:40:53.606639       1 serving.go:348] Generated self-signed cert in-memory
	W1101 00:40:55.325841       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 00:40:55.325992       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 00:40:55.326037       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 00:40:55.326096       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:40:55.489535       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1101 00:40:55.489568       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 00:40:55.491478       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 00:40:55.491564       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 00:40:55.491595       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:40:55.491985       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 00:40:55.592488       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:41:20.906519       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1101 00:41:20.906562       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1101 00:41:20.906756       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Nov 01 00:44:16 functional-258660 kubelet[4774]: E1101 00:44:16.057816    4774 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:803c351998abcb39e6a20d90b8369f66605e2e87bb7f8e9a4f500738836404e7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 00:44:16 functional-258660 kubelet[4774]: E1101 00:44:16.057875    4774 kuberuntime_image.go:53] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:803c351998abcb39e6a20d90b8369f66605e2e87bb7f8e9a4f500738836404e7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 00:44:16 functional-258660 kubelet[4774]: E1101 00:44:16.057962    4774 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-xhp7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(1e7cc846-fdbf-43fe-ae18-c257e016fc6b): ErrImagePull: loading manifest for target platform: reading manifest sha256:803c351998abcb39e6a20d90b8369f66605e2e87bb7f8e9a4f500738836404e7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:44:16 functional-258660 kubelet[4774]: E1101 00:44:16.058003    4774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:803c351998abcb39e6a20d90b8369f66605e2e87bb7f8e9a4f500738836404e7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="1e7cc846-fdbf-43fe-ae18-c257e016fc6b"
	Nov 01 00:44:27 functional-258660 kubelet[4774]: E1101 00:44:27.351515    4774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="1e7cc846-fdbf-43fe-ae18-c257e016fc6b"
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.531568    4774 manager.go:1106] Failed to create existing container: /crio-172532e73b254a2592c67707583cfc6d35c926661d7ffc0e875f8d7e7b14a0fb: Error finding container 172532e73b254a2592c67707583cfc6d35c926661d7ffc0e875f8d7e7b14a0fb: Status 404 returned error can't find the container with id 172532e73b254a2592c67707583cfc6d35c926661d7ffc0e875f8d7e7b14a0fb
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.533028    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-172532e73b254a2592c67707583cfc6d35c926661d7ffc0e875f8d7e7b14a0fb: Error finding container 172532e73b254a2592c67707583cfc6d35c926661d7ffc0e875f8d7e7b14a0fb: Status 404 returned error can't find the container with id 172532e73b254a2592c67707583cfc6d35c926661d7ffc0e875f8d7e7b14a0fb
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.534208    4774 manager.go:1106] Failed to create existing container: /crio-614c1797e0000dee47ac844a9301945e959b29470668b9e210ca9422faca5f3d: Error finding container 614c1797e0000dee47ac844a9301945e959b29470668b9e210ca9422faca5f3d: Status 404 returned error can't find the container with id 614c1797e0000dee47ac844a9301945e959b29470668b9e210ca9422faca5f3d
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.535877    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-bf8a4324c2dddcca9cef84efe6ef077011e5fd34e6061de8fe0796853e3114f0: Error finding container bf8a4324c2dddcca9cef84efe6ef077011e5fd34e6061de8fe0796853e3114f0: Status 404 returned error can't find the container with id bf8a4324c2dddcca9cef84efe6ef077011e5fd34e6061de8fe0796853e3114f0
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.536550    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-a2b4dfaf7cf3fa2ac9f6bf66abd090a881c57da2eb71110bb66bf2c876828c1a: Error finding container a2b4dfaf7cf3fa2ac9f6bf66abd090a881c57da2eb71110bb66bf2c876828c1a: Status 404 returned error can't find the container with id a2b4dfaf7cf3fa2ac9f6bf66abd090a881c57da2eb71110bb66bf2c876828c1a
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.536845    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-614c1797e0000dee47ac844a9301945e959b29470668b9e210ca9422faca5f3d: Error finding container 614c1797e0000dee47ac844a9301945e959b29470668b9e210ca9422faca5f3d: Status 404 returned error can't find the container with id 614c1797e0000dee47ac844a9301945e959b29470668b9e210ca9422faca5f3d
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.537113    4774 manager.go:1106] Failed to create existing container: /crio-fa79db6248f048c0fcbdfc9666c91bb1c587c067a6f6f728ee873cca541ee9a9: Error finding container fa79db6248f048c0fcbdfc9666c91bb1c587c067a6f6f728ee873cca541ee9a9: Status 404 returned error can't find the container with id fa79db6248f048c0fcbdfc9666c91bb1c587c067a6f6f728ee873cca541ee9a9
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.537420    4774 manager.go:1106] Failed to create existing container: /crio-a2b4dfaf7cf3fa2ac9f6bf66abd090a881c57da2eb71110bb66bf2c876828c1a: Error finding container a2b4dfaf7cf3fa2ac9f6bf66abd090a881c57da2eb71110bb66bf2c876828c1a: Status 404 returned error can't find the container with id a2b4dfaf7cf3fa2ac9f6bf66abd090a881c57da2eb71110bb66bf2c876828c1a
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.538921    4774 manager.go:1106] Failed to create existing container: /crio-1ac48457387363ce08f6164bfec037359f38636a85c06a9d60a3a11aae48c62a: Error finding container 1ac48457387363ce08f6164bfec037359f38636a85c06a9d60a3a11aae48c62a: Status 404 returned error can't find the container with id 1ac48457387363ce08f6164bfec037359f38636a85c06a9d60a3a11aae48c62a
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.539429    4774 manager.go:1106] Failed to create existing container: /crio-f17ab81187244c2f5f8261919cf397dc312b44328201ebbbc1c5a0ecbcd3e9d7: Error finding container f17ab81187244c2f5f8261919cf397dc312b44328201ebbbc1c5a0ecbcd3e9d7: Status 404 returned error can't find the container with id f17ab81187244c2f5f8261919cf397dc312b44328201ebbbc1c5a0ecbcd3e9d7
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.539631    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-fa79db6248f048c0fcbdfc9666c91bb1c587c067a6f6f728ee873cca541ee9a9: Error finding container fa79db6248f048c0fcbdfc9666c91bb1c587c067a6f6f728ee873cca541ee9a9: Status 404 returned error can't find the container with id fa79db6248f048c0fcbdfc9666c91bb1c587c067a6f6f728ee873cca541ee9a9
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.539783    4774 manager.go:1106] Failed to create existing container: /crio-bf8a4324c2dddcca9cef84efe6ef077011e5fd34e6061de8fe0796853e3114f0: Error finding container bf8a4324c2dddcca9cef84efe6ef077011e5fd34e6061de8fe0796853e3114f0: Status 404 returned error can't find the container with id bf8a4324c2dddcca9cef84efe6ef077011e5fd34e6061de8fe0796853e3114f0
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.539945    4774 manager.go:1106] Failed to create existing container: /crio-ebeee5a4cc3ee06a557361066b184c6ec8503e0f4e925c8108ca50d2a53c0b7b: Error finding container ebeee5a4cc3ee06a557361066b184c6ec8503e0f4e925c8108ca50d2a53c0b7b: Status 404 returned error can't find the container with id ebeee5a4cc3ee06a557361066b184c6ec8503e0f4e925c8108ca50d2a53c0b7b
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.540203    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-083f405f0a3309eaa3a31f370374a601f48fb635289e36b1d3a655af7b8d75c3: Error finding container 083f405f0a3309eaa3a31f370374a601f48fb635289e36b1d3a655af7b8d75c3: Status 404 returned error can't find the container with id 083f405f0a3309eaa3a31f370374a601f48fb635289e36b1d3a655af7b8d75c3
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.542416    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-f17ab81187244c2f5f8261919cf397dc312b44328201ebbbc1c5a0ecbcd3e9d7: Error finding container f17ab81187244c2f5f8261919cf397dc312b44328201ebbbc1c5a0ecbcd3e9d7: Status 404 returned error can't find the container with id f17ab81187244c2f5f8261919cf397dc312b44328201ebbbc1c5a0ecbcd3e9d7
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.542785    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-ebeee5a4cc3ee06a557361066b184c6ec8503e0f4e925c8108ca50d2a53c0b7b: Error finding container ebeee5a4cc3ee06a557361066b184c6ec8503e0f4e925c8108ca50d2a53c0b7b: Status 404 returned error can't find the container with id ebeee5a4cc3ee06a557361066b184c6ec8503e0f4e925c8108ca50d2a53c0b7b
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.543023    4774 manager.go:1106] Failed to create existing container: /docker/76e9c18a9847e3dfe6cdc2a49f212e2d669bb3f8f952ab1824179360fb84c3c1/crio-1ac48457387363ce08f6164bfec037359f38636a85c06a9d60a3a11aae48c62a: Error finding container 1ac48457387363ce08f6164bfec037359f38636a85c06a9d60a3a11aae48c62a: Status 404 returned error can't find the container with id 1ac48457387363ce08f6164bfec037359f38636a85c06a9d60a3a11aae48c62a
	Nov 01 00:44:31 functional-258660 kubelet[4774]: E1101 00:44:31.543230    4774 manager.go:1106] Failed to create existing container: /crio-083f405f0a3309eaa3a31f370374a601f48fb635289e36b1d3a655af7b8d75c3: Error finding container 083f405f0a3309eaa3a31f370374a601f48fb635289e36b1d3a655af7b8d75c3: Status 404 returned error can't find the container with id 083f405f0a3309eaa3a31f370374a601f48fb635289e36b1d3a655af7b8d75c3
	Nov 01 00:44:38 functional-258660 kubelet[4774]: E1101 00:44:38.350451    4774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="1e7cc846-fdbf-43fe-ae18-c257e016fc6b"
	Nov 01 00:44:51 functional-258660 kubelet[4774]: E1101 00:44:51.354997    4774 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="1e7cc846-fdbf-43fe-ae18-c257e016fc6b"
	
	* 
	* ==> kubernetes-dashboard [c1a4d1df20141d6e3adaa5998f14d633c0a40d8098cc0d0ed6a793a102ac4732] <==
	* 2023/11/01 00:43:36 Using namespace: kubernetes-dashboard
	2023/11/01 00:43:36 Using in-cluster config to connect to apiserver
	2023/11/01 00:43:36 Using secret token for csrf signing
	2023/11/01 00:43:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/11/01 00:43:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/11/01 00:43:36 Successful initial request to the apiserver, version: v1.28.3
	2023/11/01 00:43:36 Generating JWE encryption key
	2023/11/01 00:43:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/11/01 00:43:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/11/01 00:43:36 Initializing JWE encryption key from synchronized object
	2023/11/01 00:43:36 Creating in-cluster Sidecar client
	2023/11/01 00:43:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/01 00:43:36 Serving insecurely on HTTP port: 9090
	2023/11/01 00:44:06 Successful request to sidecar
	2023/11/01 00:43:36 Starting overwatch
	
	* 
	* ==> storage-provisioner [2bca3f804b504463b7590f0bf39f583f28561938c1196ed5246d656126e3df2d] <==
	* I1101 00:41:37.088445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 00:41:37.118419       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 00:41:37.118552       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 00:41:54.556389       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 00:41:54.556891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1c4aac37-f723-41a0-821d-8e26f964238d", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-258660_babfa25a-2a7b-44c1-9784-12ace6118346 became leader
	I1101 00:41:54.558031       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-258660_babfa25a-2a7b-44c1-9784-12ace6118346!
	I1101 00:41:54.658793       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-258660_babfa25a-2a7b-44c1-9784-12ace6118346!
	I1101 00:42:07.618821       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1101 00:42:07.620056       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"57107b2b-caea-4477-a777-1281fda6f1d9", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1101 00:42:07.619911       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2b6c1602-2cf5-4451-851a-42d7178000f3 403 0 2023-11-01 00:40:21 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-11-01 00:40:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-57107b2b-caea-4477-a777-1281fda6f1d9 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  57107b2b-caea-4477-a777-1281fda6f1d9 721 0 2023-11-01 00:42:07 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-11-01 00:42:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-11-01 00:42:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1101 00:42:07.649805       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-57107b2b-caea-4477-a777-1281fda6f1d9" provisioned
	I1101 00:42:07.649866       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1101 00:42:07.649940       1 volume_store.go:212] Trying to save persistentvolume "pvc-57107b2b-caea-4477-a777-1281fda6f1d9"
	I1101 00:42:07.697558       1 volume_store.go:219] persistentvolume "pvc-57107b2b-caea-4477-a777-1281fda6f1d9" saved
	I1101 00:42:07.697861       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"57107b2b-caea-4477-a777-1281fda6f1d9", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-57107b2b-caea-4477-a777-1281fda6f1d9
	
	* 
	* ==> storage-provisioner [a5fdfc7a9e8ab6266d3cf7b0111ec950ee96f9048431d07bb90780693e954e7b] <==
	* I1101 00:41:20.359157       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 00:41:20.385521       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 00:41:20.385673       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-258660 -n functional-258660
helpers_test.go:261: (dbg) Run:  kubectl --context functional-258660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-258660 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-258660 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258660/192.168.49.2
	Start Time:       Wed, 01 Nov 2023 00:43:03 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://30dde43e6bfe9cace477affdfcabdc4dd846b0415f05aae7ca2f698fbfb45b84
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 01 Nov 2023 00:43:21 +0000
	      Finished:     Wed, 01 Nov 2023 00:43:21 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wj8pp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wj8pp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2m8s  default-scheduler  Successfully assigned default/busybox-mount to functional-258660
	  Normal  Pulling    2m8s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     110s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.742s (17.887s including waiting)
	  Normal  Created    110s  kubelet            Created container mount-munger
	  Normal  Started    110s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-258660/192.168.49.2
	Start Time:       Wed, 01 Nov 2023 00:42:07 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xhp7m (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-xhp7m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m3s                  default-scheduler  Successfully assigned default/sp-pod to functional-258660
	  Warning  Failed     112s (x2 over 2m33s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     55s (x3 over 2m33s)   kubelet            Error: ErrImagePull
	  Warning  Failed     55s                   kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:803c351998abcb39e6a20d90b8369f66605e2e87bb7f8e9a4f500738836404e7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    20s (x5 over 2m33s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     20s (x5 over 2m33s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    9s (x4 over 3m3s)     kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.19s)
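Note on the failure mode: the claim itself provisioned cleanly (the storage-provisioner log above shows pvc-57107b2b-caea-4477-a777-1281fda6f1d9 provisioned and saved); the test then timed out because anonymous pulls of docker.io/nginx hit Docker Hub's rate limit (toomanyrequests). A hedged sketch of two possible mitigations for a rerun follows, assuming the image is already cached on the host and that registry credentials are available; the secret name dockerhub-creds and the DOCKER_USER/DOCKER_PAT variables are illustrative placeholders, not values from this run:

	# Pre-load the image into the profile so kubelet never contacts Docker Hub
	# (assumes docker.io/nginx:latest is present in the host's local image cache):
	minikube -p functional-258660 image load docker.io/nginx:latest

	# Or create registry credentials so pulls are authenticated rather than anonymous:
	kubectl --context functional-258660 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" \
	  --docker-password="$DOCKER_PAT"

With the secret approach, the sp-pod manifest would also need an imagePullSecrets entry naming dockerhub-creds before kubelet would use the credentials.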

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-992876 addons enable ingress --alsologtostderr -v=5
E1101 00:47:02.259167 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:02.264471 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:02.274731 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:02.294980 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:02.335277 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:02.415629 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:02.576002 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:02.896843 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:03.537068 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:04.817787 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:07.378886 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:12.499101 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:22.739973 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:47:43.220869 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:48:24.181915 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:49:46.102091 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:50:00.144721 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
E1101 00:52:02.259317 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 00:52:29.942300 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-992876 addons enable ingress --alsologtostderr -v=5: exit status 10 (6m0.978218858s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 00:46:51.827836 1234483 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:46:51.828512 1234483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:46:51.828546 1234483 out.go:309] Setting ErrFile to fd 2...
	I1101 00:46:51.828567 1234483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:46:51.828870 1234483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 00:46:51.829292 1234483 mustload.go:65] Loading cluster: ingress-addon-legacy-992876
	I1101 00:46:51.829732 1234483 config.go:182] Loaded profile config "ingress-addon-legacy-992876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1101 00:46:51.829803 1234483 addons.go:594] checking whether the cluster is paused
	I1101 00:46:51.829932 1234483 config.go:182] Loaded profile config "ingress-addon-legacy-992876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1101 00:46:51.829968 1234483 host.go:66] Checking if "ingress-addon-legacy-992876" exists ...
	I1101 00:46:51.830564 1234483 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:46:51.847841 1234483 ssh_runner.go:195] Run: systemctl --version
	I1101 00:46:51.847902 1234483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:46:51.865318 1234483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:46:51.966624 1234483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:46:51.966727 1234483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:46:52.010013 1234483 cri.go:89] found id: "1b41f897ebff0ca1417054f430679068124ab65e868869438aea1ef994a874da"
	I1101 00:46:52.010036 1234483 cri.go:89] found id: "0df625f0dfda532c66e4a68dee83b44e5e21939940390f87095c61dc7d190972"
	I1101 00:46:52.010042 1234483 cri.go:89] found id: "2e16f8346f39ed52a398d8e097d7ebf925359e814b166ef78cd955db0342e7de"
	I1101 00:46:52.010046 1234483 cri.go:89] found id: "1e7d915d10b4335ca8c7efe7544aad48640013406847fa709d78e2c6a1b9bceb"
	I1101 00:46:52.010050 1234483 cri.go:89] found id: "39f31514e884c9b8f272276929f504ba3213d6d23db1f14f6682e6ad5a8b5f12"
	I1101 00:46:52.010055 1234483 cri.go:89] found id: "8ad01671a57d6140cc72551ae79ee5411be2283d0e35a1ad1bd7d76c06951bfd"
	I1101 00:46:52.010059 1234483 cri.go:89] found id: "8e4ec398cc7c440723355258d2257fb31018527d58de0b9fe1726bee93c8e919"
	I1101 00:46:52.010064 1234483 cri.go:89] found id: "a0d57dc63c1b30a6631517dc123efc0e2c011f483027961c382ba3898f284dc7"
	I1101 00:46:52.010069 1234483 cri.go:89] found id: ""
	I1101 00:46:52.010117 1234483 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 00:46:52.039684 1234483 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0df625f0dfda532c66e4a68dee83b44e5e21939940390f87095c61dc7d190972","pid":2264,"status":"running","bundle":"/run/containers/storage/overlay-containers/0df625f0dfda532c66e4a68dee83b44e5e21939940390f87095c61dc7d190972/userdata","rootfs":"/var/lib/containers/storage/overlay/45c7e4ec2722c17ef581ab0d1b790242c28b54831f2256cb06971e49a2414365/merged","created":"2023-11-01T00:46:47.373979735Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5145e2","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5145e2\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0df625f0dfda532c66e4a68dee83b44e5e21939940390f87095c61dc7d190972","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-01T00:46:47.322174582Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.7","io.kubernetes.cri-o.ImageRef":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bff467f8-447wp\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"bd34668c-987e-41fe-8236-9e2c434eee33\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bff467f8-447wp_bd34668c-987e-41fe-8236-9e2c434eee33/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/45c7e4ec2722c17ef581ab0d1b790242c28b54831f2256cb06971e49a2414365/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bff467f8-447wp_kube-system_bd34668c-987e-41fe-8236-9e2c434eee33_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7dda86566ca3c5bf3e86cfe32358047458094a0b91546d80cdf72ac108da8afd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7dda86566ca3c5bf3e86cfe32358047458094a0b91546d80cdf72ac108da8afd","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bff467f8-447wp_kube-system_bd34668c-987e-41fe-8236-9e2c434eee33_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/bd34668c-987e-41fe-8236-9e2c434eee33/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/bd34668c-987e-41fe-8236-9e2c434eee33/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/bd34668c-987e-41fe-8236-9e2c434eee33/containers/coredns/451fe1fb\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/bd34668c-987e-41fe-8236-9e2c434eee33/volumes/kubernetes.io~secret/coredns-token-445rh\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bff467f8-447wp","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"bd34668c-987e-41fe-8236-9e2c434eee33","kubernetes.io/config.seen":"2023-11-01T00:46:46.961064879Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1b41f897ebff0ca1417054f430679068124ab65e868869438aea1ef994a874da","pid":2341,"status":"running","bundle":"/run/containers/storage/overlay-containers/1b41f897ebff0ca1417054f430679068124ab65e868869438aea1ef994a874da/userdata","rootfs":"/var/lib/containers/storage/overlay/8480fdf0d22b8922e601640a223c5e494742da3844a08bb6adcaac727a4804ab/merged","created":"2023-11-01T00:46:51.632171283Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ec7cf7a2","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ec7cf7a2\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1b41f897ebff0ca1417054f430679068124ab65e868869438aea1ef994a874da","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-01T00:46:51.577884851Z","io.kubernetes.cri-o.Image":"gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b090f608-18cc-4c75-b85f-08c99204530c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_b090f608-18cc-4c75-b85f-08c99204530c/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/8480fdf0d22b8922e601640a223c5e494742da3844a08bb6adcaac727a4804ab/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_b090f608-18cc-4c75-b85f-08c99204530c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a6c9d905b3c1dc093babad950c212993905298413ab9153e801151daa26845f2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a6c9d905b3c1dc093babad950c212993905298413ab9153e801151daa26845f2","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_b090f608-18cc-4c75-b85f-08c99204530c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b090f608-18cc-4c75-b85f-08c99204530c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b090f608-18cc-4c75-b85f-08c99204530c/containers/storage-provisioner/47c5341b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/b090f608-18cc-4c75-b85f-08c99204530c/volumes/kubernetes.io~secret/storage-provisioner-token-89zld\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b090f608-18cc-4c75-b85f-08c99204530c","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-11-01T00:46:46.958224080Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1e7d915d10b4335ca8c7efe7544aad48640013406847fa709d78e2c6a1b9bceb","pid":2036,"status":"running","bundle":"/run/containers/storage/overlay-containers/1e7d915d10b4335ca8c7efe7544aad48640013406847fa709d78e2c6a1b9bceb/userdata","rootfs":"/var/lib/containers/storage/overlay/c690f244939dc7dd18bdb810efe91f12c22473fc226eadb87d25b857a02ef9e0/merged","created":"2023-11-01T00:46:25.163738528Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"947b023b","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"947b023b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubern
etes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1e7d915d10b4335ca8c7efe7544aad48640013406847fa709d78e2c6a1b9bceb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-01T00:46:25.097810901Z","io.kubernetes.cri-o.Image":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.18.20","io.kubernetes.cri-o.ImageRef":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-qxwkc\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f519b66a-24e3-4796-bbab-a043a2e7104f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-qxwkc_f519b66a-24e3-4796-bbab-a043a2e7104f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cr
i-o.MountPoint":"/var/lib/containers/storage/overlay/c690f244939dc7dd18bdb810efe91f12c22473fc226eadb87d25b857a02ef9e0/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-qxwkc_kube-system_f519b66a-24e3-4796-bbab-a043a2e7104f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0ab78bb81fc11e1030858d74ceac0c37a4e770675ce379a2102d14c081558c62/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0ab78bb81fc11e1030858d74ceac0c37a4e770675ce379a2102d14c081558c62","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-qxwkc_kube-system_f519b66a-24e3-4796-bbab-a043a2e7104f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\
"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f519b66a-24e3-4796-bbab-a043a2e7104f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f519b66a-24e3-4796-bbab-a043a2e7104f/containers/kube-proxy/025d6226\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/f519b66a-24e3-4796-bbab-a043a2e7104f/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f519b66a-24e3-4796-bbab-a043a2e7104f/volumes/kubernetes.io~secret/kube-proxy-token-srh9l\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-qxwkc","io.kubernetes.pod.namespace":"kube-system
","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f519b66a-24e3-4796-bbab-a043a2e7104f","kubernetes.io/config.seen":"2023-11-01T00:46:24.733416816Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e16f8346f39ed52a398d8e097d7ebf925359e814b166ef78cd955db0342e7de","pid":2149,"status":"running","bundle":"/run/containers/storage/overlay-containers/2e16f8346f39ed52a398d8e097d7ebf925359e814b166ef78cd955db0342e7de/userdata","rootfs":"/var/lib/containers/storage/overlay/31f2db5907be9cdce3ca3353459b6f2bf6bfa1252e9ceb4a970351a3ed372f0d/merged","created":"2023-11-01T00:46:27.174722386Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"593ce354","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"59
3ce354\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e16f8346f39ed52a398d8e097d7ebf925359e814b166ef78cd955db0342e7de","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-01T00:46:27.124486099Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-d4npj\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"14459195-556d-40fb-a096-0a434c3c0177\"}","io.kubernetes.
cri-o.LogPath":"/var/log/pods/kube-system_kindnet-d4npj_14459195-556d-40fb-a096-0a434c3c0177/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/31f2db5907be9cdce3ca3353459b6f2bf6bfa1252e9ceb4a970351a3ed372f0d/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-d4npj_kube-system_14459195-556d-40fb-a096-0a434c3c0177_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d70f9e4820a4e3d08a3d436019e799e5efc0461c0c5f3186c79dcff02cc6a583/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d70f9e4820a4e3d08a3d436019e799e5efc0461c0c5f3186c79dcff02cc6a583","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-d4npj_kube-system_14459195-556d-40fb-a096-0a434c3c0177_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"h
ost_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/14459195-556d-40fb-a096-0a434c3c0177/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/14459195-556d-40fb-a096-0a434c3c0177/containers/kindnet-cni/8ff7afd9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/14459195-556d-40fb-a096-0a434c3c0177/volumes/kubernetes.io~secret/kindnet-token-s2pj9\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":fals
e}]","io.kubernetes.pod.name":"kindnet-d4npj","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"14459195-556d-40fb-a096-0a434c3c0177","kubernetes.io/config.seen":"2023-11-01T00:46:24.753982295Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"39f31514e884c9b8f272276929f504ba3213d6d23db1f14f6682e6ad5a8b5f12","pid":1559,"status":"running","bundle":"/run/containers/storage/overlay-containers/39f31514e884c9b8f272276929f504ba3213d6d23db1f14f6682e6ad5a8b5f12/userdata","rootfs":"/var/lib/containers/storage/overlay/96b03f2ea13da1bdf163587e773eeb2f1d4aaf5952dd31fbc9cbbc09884b5f17/merged","created":"2023-11-01T00:46:00.056410477Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ce880c0b","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.termin
ationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ce880c0b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"39f31514e884c9b8f272276929f504ba3213d6d23db1f14f6682e6ad5a8b5f12","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-01T00:45:59.974295218Z","io.kubernetes.cri-o.Image":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.18.20","io.kubernetes.cri-o.ImageRef":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ingress-addon-legacy-992876\",\"io.kubernetes.po
d.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"49b043cd68fd30a453bdf128db5271f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ingress-addon-legacy-992876_49b043cd68fd30a453bdf128db5271f3/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/96b03f2ea13da1bdf163587e773eeb2f1d4aaf5952dd31fbc9cbbc09884b5f17/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-992876_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/541d6044ef64cb8b74c113120a056b70728ed2bbaed0264fc368d5dffb2bb766/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"541d6044ef64cb8b74c113120a056b70728ed2bbaed0264fc368d5dffb2bb766","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ingress-addon-legacy-992876_kube-system_49b043cd68fd30a453b
df128db5271f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/containers/kube-controller-manager/cc2e51f7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonl
y\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ingress-addon-legacy-992876","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.hash":"49b043cd68fd30a4
53bdf128db5271f3","kubernetes.io/config.seen":"2023-11-01T00:45:55.634039996Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8ad01671a57d6140cc72551ae79ee5411be2283d0e35a1ad1bd7d76c06951bfd","pid":1525,"status":"running","bundle":"/run/containers/storage/overlay-containers/8ad01671a57d6140cc72551ae79ee5411be2283d0e35a1ad1bd7d76c06951bfd/userdata","rootfs":"/var/lib/containers/storage/overlay/f7c6be2a2b10d1c2a61d9b0dc67b8667b51b8509b9a4f1c52560d46ad45394ab/merged","created":"2023-11-01T00:45:59.969560795Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5ef709","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5ef709\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminati
onMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8ad01671a57d6140cc72551ae79ee5411be2283d0e35a1ad1bd7d76c06951bfd","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-01T00:45:59.883714232Z","io.kubernetes.cri-o.Image":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.18.20","io.kubernetes.cri-o.ImageRef":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ingress-addon-legacy-992876\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d12e497b0008e22acbcd5a9cf2dd48ac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ingress-addon-legacy-992876_d12e497b0008e22acbcd5a9
cf2dd48ac/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f7c6be2a2b10d1c2a61d9b0dc67b8667b51b8509b9a4f1c52560d46ad45394ab/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-992876_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e1b9ca063c0a52134e92ab3faa988b8ffee3313b2b41b054e17ba328da77af24/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e1b9ca063c0a52134e92ab3faa988b8ffee3313b2b41b054e17ba328da77af24","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ingress-addon-legacy-992876_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/p
ods/d12e497b0008e22acbcd5a9cf2dd48ac/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/containers/kube-scheduler/00a4eeb7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ingress-addon-legacy-992876","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.hash":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.seen":"2023-11-01T00:45:55.636154377Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8e4ec398cc7c440723355258d2257fb31018527d58de0b9fe1726bee93c8e919","pid":1508,"status":"running
","bundle":"/run/containers/storage/overlay-containers/8e4ec398cc7c440723355258d2257fb31018527d58de0b9fe1726bee93c8e919/userdata","rootfs":"/var/lib/containers/storage/overlay/3e70fb115dc4fa9226958721b4238cc4d0d5e6c020432da58b8bb70dc198c9ea/merged","created":"2023-11-01T00:45:59.941564527Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"c12cce18","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"c12cce18\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8e4ec398cc7c440723355258d2257fb31018527d58de0b9fe1726bee93c8e919","io.kuberne
tes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-01T00:45:59.859874873Z","io.kubernetes.cri-o.Image":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ingress-addon-legacy-992876\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b7fe6983a0b606e47264cb47cd5c97b1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ingress-addon-legacy-992876_b7fe6983a0b606e47264cb47cd5c97b1/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3e70fb115dc4fa9226958721b4238cc4d0d5e6c020432da58b8bb70dc198c9ea/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ingress-addon-legacy-992876_kube-system_b7fe6983a0
b606e47264cb47cd5c97b1_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/987bd08ca5697ba8a5c5c3c20de50a46a08f3a9da17beda387b42f6bbbd6b01c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"987bd08ca5697ba8a5c5c3c20de50a46a08f3a9da17beda387b42f6bbbd6b01c","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ingress-addon-legacy-992876_kube-system_b7fe6983a0b606e47264cb47cd5c97b1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b7fe6983a0b606e47264cb47cd5c97b1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b7fe6983a0b606e47264cb47cd5c97b1/containers/etcd/47333fba\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/
etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ingress-addon-legacy-992876","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b7fe6983a0b606e47264cb47cd5c97b1","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b7fe6983a0b606e47264cb47cd5c97b1","kubernetes.io/config.seen":"2023-11-01T00:45:55.637629008Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a0d57dc63c1b30a6631517dc123efc0e2c011f483027961c382ba3898f284dc7","pid":1467,"status":"running","bundle":"/run/containers/storage/overlay-containers/a0d57dc63c1b30a6631517dc123efc0e2c011f483027961c382ba3898f284dc7/userdata","rootfs":"/var/lib/con
tainers/storage/overlay/58cd5aa74474b8d74b84d04efeb2d6ed6380d2f0d1b32cdb438af50926f2fe4a/merged","created":"2023-11-01T00:45:59.816621116Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd1dd8ff","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd1dd8ff\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a0d57dc63c1b30a6631517dc123efc0e2c011f483027961c382ba3898f284dc7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-01T00:45:59.773505704Z","io.kubernetes.cri-o.Image":"2694cf044d665
91c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.18.20","io.kubernetes.cri-o.ImageRef":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ingress-addon-legacy-992876\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"78b40af95c64e5112ac985f00b18628c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ingress-addon-legacy-992876_78b40af95c64e5112ac985f00b18628c/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/58cd5aa74474b8d74b84d04efeb2d6ed6380d2f0d1b32cdb438af50926f2fe4a/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-992876_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.ResolvPath":"/
run/containers/storage/overlay-containers/15fb43877ef744beaeafd4d85f34540e49a07b7327e06801038120b62765c06f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"15fb43877ef744beaeafd4d85f34540e49a07b7327e06801038120b62765c06f","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ingress-addon-legacy-992876_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/containers/kube-apiserver/d398a780\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/etc-hosts\
",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ingress-addon-legacy-992876","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"78b40af95c64e5112ac985f00b18628c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"78
b40af95c64e5112ac985f00b18628c","kubernetes.io/config.seen":"2023-11-01T00:45:55.627486083Z","kubernetes.io/config.source":"file"},"owner":"root"}]
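The JSON array above is the runtime's container listing (runc-style `list -f json` output) that minikube's cri.go parses; of all the fields dumped, only `id` and `status` feed the decisions logged below. A minimal decode sketch under that assumption (the struct name and abbreviated IDs are illustrative, not minikube's actual types):

    // Sketch: decode the runc-style listing above down to the two fields
    // that matter for the cri.go:129 lines below.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        // abbreviated stand-in for the full eight-element array above
        raw := `[{"id":"1b41f897ebff","status":"running"},{"id":"8e4ec398cc7c","status":"running"}]`
        var cs []runcContainer
        if err := json.Unmarshal([]byte(raw), &cs); err != nil {
            panic(err)
        }
        for _, c := range cs {
            fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status) // cf. cri.go:129
        }
    }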
	I1101 00:46:52.040346 1234483 cri.go:126] list returned 8 containers
	I1101 00:46:52.040358 1234483 cri.go:129] container: {ID:0df625f0dfda532c66e4a68dee83b44e5e21939940390f87095c61dc7d190972 Status:running}
	I1101 00:46:52.040381 1234483 cri.go:135] skipping {0df625f0dfda532c66e4a68dee83b44e5e21939940390f87095c61dc7d190972 running}: state = "running", want "paused"
	I1101 00:46:52.040394 1234483 cri.go:129] container: {ID:1b41f897ebff0ca1417054f430679068124ab65e868869438aea1ef994a874da Status:running}
	I1101 00:46:52.040401 1234483 cri.go:135] skipping {1b41f897ebff0ca1417054f430679068124ab65e868869438aea1ef994a874da running}: state = "running", want "paused"
	I1101 00:46:52.040407 1234483 cri.go:129] container: {ID:1e7d915d10b4335ca8c7efe7544aad48640013406847fa709d78e2c6a1b9bceb Status:running}
	I1101 00:46:52.040416 1234483 cri.go:135] skipping {1e7d915d10b4335ca8c7efe7544aad48640013406847fa709d78e2c6a1b9bceb running}: state = "running", want "paused"
	I1101 00:46:52.040429 1234483 cri.go:129] container: {ID:2e16f8346f39ed52a398d8e097d7ebf925359e814b166ef78cd955db0342e7de Status:running}
	I1101 00:46:52.040436 1234483 cri.go:135] skipping {2e16f8346f39ed52a398d8e097d7ebf925359e814b166ef78cd955db0342e7de running}: state = "running", want "paused"
	I1101 00:46:52.040445 1234483 cri.go:129] container: {ID:39f31514e884c9b8f272276929f504ba3213d6d23db1f14f6682e6ad5a8b5f12 Status:running}
	I1101 00:46:52.040451 1234483 cri.go:135] skipping {39f31514e884c9b8f272276929f504ba3213d6d23db1f14f6682e6ad5a8b5f12 running}: state = "running", want "paused"
	I1101 00:46:52.040457 1234483 cri.go:129] container: {ID:8ad01671a57d6140cc72551ae79ee5411be2283d0e35a1ad1bd7d76c06951bfd Status:running}
	I1101 00:46:52.040464 1234483 cri.go:135] skipping {8ad01671a57d6140cc72551ae79ee5411be2283d0e35a1ad1bd7d76c06951bfd running}: state = "running", want "paused"
	I1101 00:46:52.040475 1234483 cri.go:129] container: {ID:8e4ec398cc7c440723355258d2257fb31018527d58de0b9fe1726bee93c8e919 Status:running}
	I1101 00:46:52.040484 1234483 cri.go:135] skipping {8e4ec398cc7c440723355258d2257fb31018527d58de0b9fe1726bee93c8e919 running}: state = "running", want "paused"
	I1101 00:46:52.040491 1234483 cri.go:129] container: {ID:a0d57dc63c1b30a6631517dc123efc0e2c011f483027961c382ba3898f284dc7 Status:running}
	I1101 00:46:52.040501 1234483 cri.go:135] skipping {a0d57dc63c1b30a6631517dc123efc0e2c011f483027961c382ba3898f284dc7 running}: state = "running", want "paused"
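cri.go:129/135 above iterate that list and skip every container whose state is not the wanted one; this pause-status check wants "paused", all eight containers are "running", so the filtered list comes back empty and the addon flow continues. A hypothetical sketch of that filter (names are illustrative):

    // Sketch of the cri.go:135 filter: keep only containers already in the
    // wanted state, log a "skipping" line for the rest.
    package main

    import "fmt"

    type container struct{ ID, Status string }

    func filterByState(cs []container, want string) []container {
        var kept []container
        for _, c := range cs {
            if c.Status != want {
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
                continue
            }
            kept = append(kept, c)
        }
        return kept
    }

    func main() {
        cs := []container{{"a0d57dc63c1b", "running"}} // ID abbreviated from the listing
        fmt.Println(len(filterByState(cs, "paused")))  // 0: nothing is paused
    }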
	I1101 00:46:52.043097 1234483 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1101 00:46:52.045104 1234483 config.go:182] Loaded profile config "ingress-addon-legacy-992876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1101 00:46:52.045123 1234483 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-992876"
	I1101 00:46:52.045132 1234483 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-992876"
	I1101 00:46:52.045170 1234483 host.go:66] Checking if "ingress-addon-legacy-992876" exists ...
	I1101 00:46:52.045612 1234483 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:46:52.064978 1234483 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1101 00:46:52.067184 1234483 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1101 00:46:52.069383 1234483 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1101 00:46:52.071438 1234483 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 00:46:52.071482 1234483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1101 00:46:52.071553 1234483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:46:52.088634 1234483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:46:52.197986 1234483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
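ssh_runner.go:362 and :195 above show the addon install path: the 15618-byte manifest is copied from memory into /etc/kubernetes/addons/ inside the node over SSH (port 34307 discovered via docker inspect), then applied with the node's version-matched kubectl. A rough, test-only reconstruction of the SSH half with golang.org/x/crypto/ssh, assuming the key path and port from the log (the in-memory scp step is glossed over):

    // Test-only sketch: connect the way sshutil.go:53 describes and run the
    // exact kubectl command from ssh_runner.go:195.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only against a local test node
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:34307", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, _ := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml")
        fmt.Print(string(out))
    }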
	I1101 00:46:52.700726 1234483 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-992876"
	I1101 00:46:52.702415 1234483 out.go:177] * Verifying ingress addon...
	I1101 00:46:52.705166 1234483 kapi.go:59] client config for ingress-addon-legacy-992876: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
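The rest.Config dump above is mostly zero values; the only populated pieces are the API server address and the profile's client cert, key, and CA. The same client can be built with stock client-go, assuming those three paths:

    // Sketch: the client config from kapi.go:59, rebuilt with plain client-go.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func newClient() (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key",
                CAFile:   "/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt",
            },
        }
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        if _, err := newClient(); err != nil {
            panic(err)
        }
    }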
	I1101 00:46:52.706275 1234483 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 00:46:52.706676 1234483 cert_rotation.go:137] Starting client certificate rotation controller
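kapi.go:96 below re-lists pods matching the selector roughly every 500ms and logs each pod's phase until all of them are Running; the long run of identical Pending lines that follows is this loop observing the ingress-nginx controller never coming up, which is what ultimately fails the addon verification. A hedged reconstruction of such a poll with client-go (the function name and phase check are illustrative, not minikube's exact logic):

    // Sketch of the poll behind the kapi.go:96 lines below.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForIngressPods(ctx context.Context, cs *kubernetes.Clientset) error {
        selector := "app.kubernetes.io/name=ingress-nginx"
        return wait.PollImmediateUntil(500*time.Millisecond, func() (bool, error) {
            pods, err := cs.CoreV1().Pods("ingress-nginx").List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, nil // transient API errors just mean "poll again"
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                    return false, nil
                }
            }
            return len(pods.Items) > 0, nil
        }, ctx.Done())
    }

    func main() {
        // wiring omitted; see the client sketch above for building cs
    }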
	I1101 00:46:52.726573 1234483 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 00:46:52.726600 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:52.730031 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:53.234068 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:53.734479 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:54.233736 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:54.735172 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:55.234441 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:55.734645 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:56.233925 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:56.734370 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:57.235146 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:57.734525 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:58.233828 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:58.733893 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:59.233874 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:46:59.733993 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:00.233996 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:00.733823 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:01.234013 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:01.733963 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:02.234830 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:02.734399 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:03.234357 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:03.734845 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:04.234134 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:04.734437 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:05.234681 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:05.734406 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:06.233894 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:06.733970 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:07.234480 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:07.734443 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:08.234800 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:08.733844 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:09.233669 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:09.733851 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:10.234403 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:10.734886 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:11.234075 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:11.733921 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:12.234644 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:12.733839 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:13.233729 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:13.733881 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:14.233913 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:14.734021 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:15.234321 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:15.734753 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:16.234224 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:16.734851 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:17.234555 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:17.734884 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:18.234308 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:18.734787 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:19.234086 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:19.735497 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:20.233773 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:20.734099 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:21.234245 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:21.734511 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:22.234119 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:22.734688 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:23.233866 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:23.734000 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:24.234127 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:24.734376 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:25.234731 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:25.733997 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:26.233910 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:26.733861 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:27.234408 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:27.734803 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:28.234099 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:28.733918 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:29.233976 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:29.734180 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:30.234461 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:30.734641 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:31.233876 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:31.733824 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:32.234434 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:32.733780 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:33.234393 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:33.734548 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:34.234036 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:34.734357 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:35.234743 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:35.733970 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:36.234621 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:36.733853 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:37.233896 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:37.734319 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:38.234559 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:38.733808 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:39.233817 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:39.734011 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:40.234400 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:40.734733 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:41.233963 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:41.733898 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:42.234748 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:42.734095 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:43.234754 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:43.733849 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:44.234162 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:44.734644 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:45.234198 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:45.734722 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:46.233970 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:46.733939 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:47.234421 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:47.735102 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:48.233951 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:48.734044 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:49.234351 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:49.734938 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:50.234066 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:50.733993 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:51.234500 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:51.733819 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:52.234752 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:52.735487 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:53.233790 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:53.734048 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:54.234520 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:54.733700 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:55.233801 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:55.734044 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:56.234396 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:56.735053 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:57.234549 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:57.735448 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:58.235110 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:58.734027 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:59.234339 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:47:59.734699 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:00.234475 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:00.734684 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:01.233740 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:01.734033 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:02.234990 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:02.734456 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:03.234667 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:03.734064 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:04.234816 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:04.733932 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:05.234018 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:05.733964 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:06.234268 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:06.734623 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:07.234080 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:07.734995 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:08.233848 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:08.734103 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:09.233971 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:09.734317 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:10.234505 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:10.733910 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:11.233916 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:11.734062 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:12.234742 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:12.734435 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:13.234862 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:13.734013 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:14.234123 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:14.734863 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:15.234140 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:15.734620 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:16.234099 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:16.733942 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:17.234550 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:17.734163 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:18.234985 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:18.733952 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:19.234287 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:19.734252 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:20.234418 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:20.734343 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:21.234759 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:21.733938 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:22.234598 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:22.734926 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:23.233914 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:23.734215 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:24.234727 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:24.733959 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:25.239974 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:25.733969 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:26.233708 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:26.733694 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:27.234450 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:27.735352 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:28.234568 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:28.733908 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:29.233897 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:29.733619 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:30.233897 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:30.734150 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:31.234652 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:31.733925 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:32.234944 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:32.734421 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:33.234819 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:33.734464 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:34.234924 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:34.733745 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:35.233851 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:35.733817 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:36.234325 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:36.734608 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:37.233924 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:37.734835 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:38.234003 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:38.733826 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:39.233906 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:39.734151 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:40.234737 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:40.733834 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:41.233845 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:41.733996 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:42.234738 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:42.734066 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:43.234195 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:43.734055 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:44.234496 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:44.733732 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:45.234059 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:45.734438 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:46.234289 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:46.734620 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:47.233699 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:47.735156 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:48.234306 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:48.734473 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:49.234408 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:49.734635 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:50.234026 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:50.733918 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:51.233888 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:51.733910 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:52.234360 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:52.734727 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:53.233819 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:53.734148 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:54.234565 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:54.733719 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:55.233823 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:55.734085 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:56.234173 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:56.734543 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:57.233848 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:57.734860 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:58.233800 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:58.734343 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:59.234601 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:48:59.733852 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:00.234226 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:00.734507 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:01.233680 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:01.733857 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:02.234701 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:02.734753 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:03.234622 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:03.733628 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:04.235077 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:04.734451 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:05.233874 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:05.734121 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:06.234348 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:06.734710 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:07.233956 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:07.734154 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:08.234339 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:08.734936 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:09.234395 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:09.734944 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:10.233915 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:10.734246 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:11.234116 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:11.734625 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:12.234512 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:12.734766 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:13.233911 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:13.733893 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:14.233705 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:14.733849 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:15.233973 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:15.734617 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:16.233971 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:16.733734 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:17.234718 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:17.734353 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:18.234517 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:18.734428 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:19.234382 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:19.734490 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:20.235036 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:20.733825 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:21.233892 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:21.733840 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:22.234705 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:22.735385 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:23.234701 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:23.733774 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:24.234253 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:24.734433 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:25.234679 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:25.733806 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:26.233859 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:26.734280 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:27.234802 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:27.734685 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:28.233779 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:28.734105 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:29.234652 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:29.733937 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:30.234083 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:30.734542 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:31.234034 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:31.734053 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:32.234822 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:32.734018 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:33.233948 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:33.733957 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:34.235089 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:34.734438 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:35.234787 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:35.733961 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:36.234285 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:36.734475 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:37.234086 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:37.735580 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:38.233761 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:38.733783 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:39.234057 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:39.733683 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:40.233885 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:40.734176 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:41.234639 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:41.734796 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:42.234620 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:42.734021 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:43.234046 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:43.734274 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:44.234585 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:44.733663 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:45.234315 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:45.734497 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:46.234855 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:46.733907 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:47.234590 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:47.735244 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:48.234531 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:48.733653 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:49.233687 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:49.733741 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:50.234047 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:50.733759 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:51.233776 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:51.734145 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:52.234617 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:52.733995 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:53.233732 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:53.734011 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:54.234518 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:54.734094 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:55.234306 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:55.734787 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:56.233831 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:56.733829 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:57.234343 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:57.735639 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:58.233873 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:58.733948 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:59.233750 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:49:59.733804 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:00.234076 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:00.734187 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:01.234386 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:01.734856 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:02.234664 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:02.734624 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:03.233663 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:03.733889 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:04.233729 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:04.733694 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:05.234233 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:05.734381 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:06.234720 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:06.733667 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:07.234360 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:07.734499 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:08.234676 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:08.734606 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:09.233845 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:09.734214 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:10.234632 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:10.739021 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:11.234021 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:11.734013 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:12.234385 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:12.734640 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:13.233626 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:13.733748 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:14.233797 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:14.734138 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:15.234624 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:15.733779 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:16.234199 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:16.734796 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:17.233978 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:17.735446 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:18.234778 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:18.734393 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:19.234677 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:19.734909 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:20.233816 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:20.733819 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:21.234080 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:21.734259 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:22.233864 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:22.734173 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:23.234337 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:23.734463 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:24.234759 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:24.734497 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:25.234991 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:25.733773 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:26.233908 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:26.734451 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:27.233914 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:27.735324 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:28.235051 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:28.734637 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:29.234021 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:29.734263 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:30.234584 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:30.733730 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:31.234703 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:31.733967 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:32.234872 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:32.734239 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:33.234633 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:33.733640 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:34.233804 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:34.733731 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:35.234272 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:35.734696 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:36.234273 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:36.734936 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:37.234373 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:37.735475 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:38.235143 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:38.734592 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:39.233814 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:39.733678 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:40.234301 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:40.734938 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:41.234042 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:41.734481 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:42.235517 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:42.733847 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:43.238432 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:43.734375 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:44.234660 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:44.733924 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:45.234426 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:45.734906 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:46.234032 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:46.734065 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:47.234542 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:47.735471 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:48.234775 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:48.734084 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:49.234387 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:49.734764 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:50.233792 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:50.734525 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:51.233643 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:51.733933 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:52.234596 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:52.734582 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:53.233765 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:53.734369 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:54.234782 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:54.733802 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:55.233979 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:55.734486 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:56.233888 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:56.734343 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:57.233805 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:57.734292 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:58.234707 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:58.733907 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:59.233853 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:50:59.734337 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:00.234753 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:00.734626 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:01.233829 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:01.733963 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:02.234585 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:02.734575 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:03.234025 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:03.733793 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:04.233807 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:04.733815 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:05.234038 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:05.733983 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:06.233762 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:06.733727 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:07.234021 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:07.734893 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:08.234083 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:08.734334 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:09.234367 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:09.734563 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:10.233702 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:10.733832 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:11.233915 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:11.734373 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:12.233928 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:12.734464 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:13.234627 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:13.733932 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:14.233929 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:14.734209 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:15.234619 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:15.733904 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:16.233798 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:16.733995 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:17.234885 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:17.735414 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:18.234636 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:18.734457 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:19.236206 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:19.736130 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:20.234167 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:20.734577 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:21.233867 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:21.734208 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:22.234583 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:22.734809 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:23.233961 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:23.734401 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:24.234750 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:24.734212 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:25.234409 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:25.734649 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:26.233758 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:26.733910 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:27.234355 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:27.734628 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:28.233779 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:28.733932 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:29.233689 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:29.733987 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:30.234369 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:30.734585 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:31.234147 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:31.734461 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:32.234093 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:32.734644 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:33.234217 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:33.734574 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:34.233756 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:34.734000 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:35.234209 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:35.734463 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:36.234788 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:36.733723 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:37.234118 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:37.734645 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:38.233889 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:51:38.733897 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... the same kapi.go:96 "waiting for pod" poll repeats every ~500ms, 00:51:39.233999 through 00:52:51.734045 (146 lines, all Pending: [<nil>]), elided ...]
	I1101 00:52:52.234522 1234483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 00:52:52.707299 1234483 kapi.go:107] duration metric: took 6m0.00101087s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 00:52:52.710148 1234483 out.go:177] 
	W1101 00:52:52.712516 1234483 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	W1101 00:52:52.712535 1234483 out.go:239] * 
	W1101 00:52:52.718974 1234483 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 00:52:52.721300 1234483 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
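The kapi.go:96 lines above record a plain label-selector poll: minikube re-lists the pods matching app.kubernetes.io/name=ingress-nginx every ~500ms and gives up when its 6m0s deadline expires, which is what surfaces here as MK_ADDON_ENABLE and exit status 10. A minimal client-go sketch of such a wait loop, for illustration only (waitForSelector and the hard-coded 500ms interval are assumptions mirroring the log, not minikube's actual code):

	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForSelector re-lists pods matching selector in ns every 500ms and
	// returns nil once every matching pod reports phase Running (sketch only).
	func waitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing usable yet; keep polling, as the Pending lines above do
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // at least one pod still Pending; retry after the interval
				}
			}
			return true, nil // all matching pods are Running
		})
	}

If the deadline passes first, wait.PollImmediate returns wait.ErrWaitTimeout; minikube's own loop reports the equivalent condition as the "context deadline exceeded" callback error shown above.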
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-992876
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-992876:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b",
	        "Created": "2023-11-01T00:45:35.497418765Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1231900,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-01T00:45:35.812315478Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bd2c3f7c992aecdf624ceae92825f3a10bf56bd552768efdb49aafbacd808193",
	        "ResolvConfPath": "/var/lib/docker/containers/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/hosts",
	        "LogPath": "/var/lib/docker/containers/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b-json.log",
	        "Name": "/ingress-addon-legacy-992876",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-992876:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-992876",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92d79260e9d714ebc09d59a636cb15fd32b324c68cb63a80cfedebadbaa88cf2-init/diff:/var/lib/docker/overlay2/d052914c945f7ab680be56190d2f2374e48b87c8da40d55e2692538d0bc19343/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92d79260e9d714ebc09d59a636cb15fd32b324c68cb63a80cfedebadbaa88cf2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92d79260e9d714ebc09d59a636cb15fd32b324c68cb63a80cfedebadbaa88cf2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92d79260e9d714ebc09d59a636cb15fd32b324c68cb63a80cfedebadbaa88cf2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-992876",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-992876/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-992876",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-992876",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-992876",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67fe2b3f7f221255d55acbe0e4fba80c0726a6ec7c376ebbf09d203aea670da3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34307"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34306"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34303"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34305"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34304"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67fe2b3f7f22",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-992876": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f6097a1d1f0",
	                        "ingress-addon-legacy-992876"
	                    ],
	                    "NetworkID": "ed23880bb8b60607bc45c80d538ed0fd6221635cb164fcc8b18d96ae90058ee6",
	                    "EndpointID": "d7884ccf8b8796508791a0f422954ce5cf068a4de8b5e2f0011015110f4bd61d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
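The docker inspect dump above is the raw source for the post-mortem checks; the same signals (State.Status, the 127.0.0.1 host-port bindings, and the node's 192.168.49.2 address on its user-defined network) can also be read programmatically. A minimal sketch with the Docker Engine Go SDK, assuming a reachable daemon and reusing the container name from the dump:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		ctr, err := cli.ContainerInspect(context.Background(), "ingress-addon-legacy-992876")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("state:", ctr.State.Status) // "running" in the dump above
		for port, bindings := range ctr.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort) // e.g. 22/tcp -> 127.0.0.1:34307
			}
		}
		for name, ep := range ctr.NetworkSettings.Networks {
			fmt.Printf("network %s: IP %s\n", name, ep.IPAddress) // 192.168.49.2 above
		}
	}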
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-992876 -n ingress-addon-legacy-992876
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddonActivation FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-992876 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-992876 logs -n 25: (1.408580833s)
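The "(dbg) Run"/"(dbg) Done" pairs in these helpers come from a harness that shells out to the binaries under test and records each command's wall-clock time (1.408580833s for the logs call above). A simplified sketch of that pattern with os/exec; runDbg is a hypothetical name, not the actual helper in helpers_test.go:

	package harness

	import (
		"os/exec"
		"testing"
		"time"
	)

	// runDbg runs a command, fails the test on a non-zero exit, and logs the
	// elapsed time in the same style as the "(dbg) Done: ..." lines above.
	func runDbg(t *testing.T, name string, args ...string) string {
		t.Helper()
		start := time.Now()
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			t.Fatalf("(dbg) Non-zero exit: %s %v: %v\n%s", name, args, err, out)
		}
		t.Logf("(dbg) Done: %s %v: (%s)", name, args, time.Since(start))
		return string(out)
	}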
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	| image          | functional-258660 image rm                                             | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-258660               |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| image          | functional-258660 image load                                           | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| image          | functional-258660 image save --daemon                                  | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-258660               |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/1202897.pem                                             |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /usr/share/ca-certificates/1202897.pem                                 |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/12028972.pem                                            |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /usr/share/ca-certificates/12028972.pem                                |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/test/nested/copy/1202897/hosts                                    |                             |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format short                                                |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format yaml                                                 |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh pgrep                                            | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC |                     |
	|                | buildkitd                                                              |                             |         |                |                     |                     |
	| image          | functional-258660 image build -t                                       | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | localhost/my-image:functional-258660                                   |                             |         |                |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	| image          | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format json                                                 |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format table                                                |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| delete         | -p functional-258660                                                   | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:45 UTC |
	| start          | -p ingress-addon-legacy-992876                                         | ingress-addon-legacy-992876 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:46 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |                |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |                |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-992876                                            | ingress-addon-legacy-992876 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:46 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |                |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:45:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:45:14.318501 1231442 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:45:14.318675 1231442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:14.318685 1231442 out.go:309] Setting ErrFile to fd 2...
	I1101 00:45:14.318692 1231442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:14.318960 1231442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 00:45:14.319373 1231442 out.go:303] Setting JSON to false
	I1101 00:45:14.320401 1231442 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30462,"bootTime":1698769053,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 00:45:14.320469 1231442 start.go:138] virtualization:  
	I1101 00:45:14.323115 1231442 out.go:177] * [ingress-addon-legacy-992876] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 00:45:14.325767 1231442 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:45:14.327614 1231442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:45:14.325897 1231442 notify.go:220] Checking for updates...
	I1101 00:45:14.332237 1231442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:45:14.334269 1231442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 00:45:14.336378 1231442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 00:45:14.338362 1231442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:45:14.340442 1231442 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:45:14.365550 1231442 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 00:45:14.365661 1231442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:45:14.445436 1231442 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-01 00:45:14.435868228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:45:14.445545 1231442 docker.go:295] overlay module found
	I1101 00:45:14.448170 1231442 out.go:177] * Using the docker driver based on user configuration
	I1101 00:45:14.450122 1231442 start.go:298] selected driver: docker
	I1101 00:45:14.450140 1231442 start.go:902] validating driver "docker" against <nil>
	I1101 00:45:14.450153 1231442 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:45:14.450822 1231442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:45:14.512897 1231442 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-01 00:45:14.503578004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:45:14.513091 1231442 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1101 00:45:14.513312 1231442 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:45:14.515476 1231442 out.go:177] * Using Docker driver with root privileges
	I1101 00:45:14.517318 1231442 cni.go:84] Creating CNI manager for ""
	I1101 00:45:14.517338 1231442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:45:14.517350 1231442 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 00:45:14.517364 1231442 start_flags.go:323] config:
	{Name:ingress-addon-legacy-992876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-992876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:14.519632 1231442 out.go:177] * Starting control plane node ingress-addon-legacy-992876 in cluster ingress-addon-legacy-992876
	I1101 00:45:14.521922 1231442 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 00:45:14.524084 1231442 out.go:177] * Pulling base image ...
	I1101 00:45:14.525938 1231442 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon
	I1101 00:45:14.525902 1231442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1101 00:45:14.542905 1231442 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon, skipping pull
	I1101 00:45:14.542926 1231442 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 exists in daemon, skipping load
	I1101 00:45:14.589798 1231442 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1101 00:45:14.589822 1231442 cache.go:56] Caching tarball of preloaded images
	I1101 00:45:14.590004 1231442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1101 00:45:14.592111 1231442 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1101 00:45:14.594050 1231442 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:45:14.705131 1231442 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1101 00:45:27.657161 1231442 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:45:27.657265 1231442 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:45:28.845377 1231442 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1101 00:45:28.845780 1231442 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/config.json ...
	I1101 00:45:28.845814 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/config.json: {Name:mk856de582bbe0141dd4122b1ee948926d338d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:28.846010 1231442 cache.go:194] Successfully downloaded all kic artifacts
	I1101 00:45:28.846035 1231442 start.go:365] acquiring machines lock for ingress-addon-legacy-992876: {Name:mk5485bd7d6159e0587ed84411769832540520ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:45:28.846097 1231442 start.go:369] acquired machines lock for "ingress-addon-legacy-992876" in 46.711µs
	I1101 00:45:28.846120 1231442 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-992876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-992876 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:45:28.846186 1231442 start.go:125] createHost starting for "" (driver="docker")
	I1101 00:45:28.848777 1231442 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 00:45:28.849018 1231442 start.go:159] libmachine.API.Create for "ingress-addon-legacy-992876" (driver="docker")
	I1101 00:45:28.849042 1231442 client.go:168] LocalClient.Create starting
	I1101 00:45:28.849110 1231442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem
	I1101 00:45:28.849146 1231442 main.go:141] libmachine: Decoding PEM data...
	I1101 00:45:28.849162 1231442 main.go:141] libmachine: Parsing certificate...
	I1101 00:45:28.849218 1231442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem
	I1101 00:45:28.849242 1231442 main.go:141] libmachine: Decoding PEM data...
	I1101 00:45:28.849254 1231442 main.go:141] libmachine: Parsing certificate...
	I1101 00:45:28.849598 1231442 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-992876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 00:45:28.869097 1231442 cli_runner.go:211] docker network inspect ingress-addon-legacy-992876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 00:45:28.869592 1231442 network_create.go:281] running [docker network inspect ingress-addon-legacy-992876] to gather additional debugging logs...
	I1101 00:45:28.869618 1231442 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-992876
	W1101 00:45:28.886875 1231442 cli_runner.go:211] docker network inspect ingress-addon-legacy-992876 returned with exit code 1
	I1101 00:45:28.886904 1231442 network_create.go:284] error running [docker network inspect ingress-addon-legacy-992876]: docker network inspect ingress-addon-legacy-992876: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-992876 not found
	I1101 00:45:28.886921 1231442 network_create.go:286] output of [docker network inspect ingress-addon-legacy-992876]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-992876 not found
	
	** /stderr **
	I1101 00:45:28.887032 1231442 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 00:45:28.904784 1231442 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000484bd0}
	I1101 00:45:28.904820 1231442 network_create.go:124] attempt to create docker network ingress-addon-legacy-992876 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 00:45:28.904877 1231442 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 ingress-addon-legacy-992876
	I1101 00:45:28.978799 1231442 network_create.go:108] docker network ingress-addon-legacy-992876 192.168.49.0/24 created
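
The network_create.go lines above first probe for a free private /24 (settling on 192.168.49.0/24) and then shell out to docker. A simplified Go sketch of just the creation call, mirroring the flags in the logged command; the subnet scan and collision-retry logic of the real code are omitted:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // createMinikubeNetwork runs the same docker invocation as the log line
    // above: a labeled bridge network with a fixed subnet, gateway and MTU.
    func createMinikubeNetwork(name, subnet, gateway string) error {
    	args := []string{
    		"network", "create", "--driver=bridge",
    		"--subnet=" + subnet, "--gateway=" + gateway,
    		"-o", "--ip-masq", "-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io=" + name,
    		name,
    	}
    	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
    		return fmt.Errorf("docker network create: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := createMinikubeNetwork("ingress-addon-legacy-992876", "192.168.49.0/24", "192.168.49.1"); err != nil {
    		fmt.Println(err)
    	}
    }
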
	I1101 00:45:28.978831 1231442 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-992876" container
	I1101 00:45:28.978902 1231442 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 00:45:28.995093 1231442 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-992876 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --label created_by.minikube.sigs.k8s.io=true
	I1101 00:45:29.013853 1231442 oci.go:103] Successfully created a docker volume ingress-addon-legacy-992876
	I1101 00:45:29.013941 1231442 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-992876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --entrypoint /usr/bin/test -v ingress-addon-legacy-992876:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -d /var/lib
	I1101 00:45:30.495413 1231442 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-992876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --entrypoint /usr/bin/test -v ingress-addon-legacy-992876:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -d /var/lib: (1.481428858s)
	I1101 00:45:30.495444 1231442 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-992876
	I1101 00:45:30.495472 1231442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1101 00:45:30.495494 1231442 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 00:45:30.495584 1231442 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-992876:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 00:45:35.415160 1231442 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-992876:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir: (4.919528547s)
	I1101 00:45:35.415192 1231442 kic.go:203] duration metric: took 4.919696 seconds to extract preloaded images to volume
	W1101 00:45:35.415322 1231442 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 00:45:35.415433 1231442 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 00:45:35.481916 1231442 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-992876 --name ingress-addon-legacy-992876 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --network ingress-addon-legacy-992876 --ip 192.168.49.2 --volume ingress-addon-legacy-992876:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458
	I1101 00:45:35.820391 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Running}}
	I1101 00:45:35.849347 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:45:35.878107 1231442 cli_runner.go:164] Run: docker exec ingress-addon-legacy-992876 stat /var/lib/dpkg/alternatives/iptables
	I1101 00:45:35.945322 1231442 oci.go:144] the created container "ingress-addon-legacy-992876" has a running status.
	I1101 00:45:35.945352 1231442 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa...
	I1101 00:45:36.565627 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 00:45:36.565716 1231442 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 00:45:36.593366 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:45:36.619419 1231442 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 00:45:36.619439 1231442 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-992876 chown docker:docker /home/docker/.ssh/authorized_keys]
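
The kic.go ssh-key step generates an RSA keypair on the host, pushes the public half into the container's authorized_keys, and fixes ownership. A minimal sketch of the keypair generation, using golang.org/x/crypto/ssh for the authorized_keys encoding; the output paths are illustrative, and the real helper lives in minikube's kic package:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// PEM-encode the private key.
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	// Public half in authorized_keys line format.
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	// 0600 on the private key, as sshd requires.
    	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
    		panic(err)
    	}
    }
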
	I1101 00:45:36.713959 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:45:36.752081 1231442 machine.go:88] provisioning docker machine ...
	I1101 00:45:36.752115 1231442 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-992876"
	I1101 00:45:36.752185 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:36.774269 1231442 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:36.774716 1231442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1101 00:45:36.774745 1231442 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-992876 && echo "ingress-addon-legacy-992876" | sudo tee /etc/hostname
	I1101 00:45:36.945421 1231442 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-992876
	
	I1101 00:45:36.945517 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:36.972963 1231442 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:36.973428 1231442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1101 00:45:36.973454 1231442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-992876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-992876/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-992876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:45:37.122116 1231442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 00:45:37.122149 1231442 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 00:45:37.122169 1231442 ubuntu.go:177] setting up certificates
	I1101 00:45:37.122178 1231442 provision.go:83] configureAuth start
	I1101 00:45:37.122242 1231442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-992876
	I1101 00:45:37.140872 1231442 provision.go:138] copyHostCerts
	I1101 00:45:37.140921 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 00:45:37.140953 1231442 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem, removing ...
	I1101 00:45:37.140963 1231442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 00:45:37.141164 1231442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 00:45:37.141267 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 00:45:37.141288 1231442 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem, removing ...
	I1101 00:45:37.141297 1231442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 00:45:37.141327 1231442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 00:45:37.141375 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 00:45:37.141394 1231442 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem, removing ...
	I1101 00:45:37.141406 1231442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 00:45:37.141434 1231442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 00:45:37.141543 1231442 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-992876 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-992876]
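
The server cert above is issued with both IP and DNS SANs so the container's Docker/SSH endpoints answer on every name minikube might dial. A self-signed Go sketch of a certificate carrying those SANs; the real provisioner signs with the minikube CA key rather than self-signing:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-992876"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		// SANs matching the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-992876"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
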
	I1101 00:45:37.380093 1231442 provision.go:172] copyRemoteCerts
	I1101 00:45:37.380162 1231442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:45:37.380212 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:37.398057 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:37.499773 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:45:37.499855 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:45:37.529417 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:45:37.529482 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:45:37.557515 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:45:37.557576 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 00:45:37.585139 1231442 provision.go:86] duration metric: configureAuth took 462.945778ms
	I1101 00:45:37.585170 1231442 ubuntu.go:193] setting minikube options for container-runtime
	I1101 00:45:37.585369 1231442 config.go:182] Loaded profile config "ingress-addon-legacy-992876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1101 00:45:37.585477 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:37.602792 1231442 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:37.603224 1231442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1101 00:45:37.603249 1231442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:45:37.884196 1231442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:45:37.884259 1231442 machine.go:91] provisioned docker machine in 1.132154245s
	I1101 00:45:37.884283 1231442 client.go:171] LocalClient.Create took 9.035234436s
	I1101 00:45:37.884317 1231442 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-992876" took 9.035298174s
	I1101 00:45:37.884359 1231442 start.go:300] post-start starting for "ingress-addon-legacy-992876" (driver="docker")
	I1101 00:45:37.884385 1231442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:45:37.884498 1231442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:45:37.884561 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:37.903555 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:38.008812 1231442 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:45:38.013061 1231442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 00:45:38.013101 1231442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 00:45:38.013113 1231442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 00:45:38.013120 1231442 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 00:45:38.013131 1231442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 00:45:38.013202 1231442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 00:45:38.013294 1231442 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> 12028972.pem in /etc/ssl/certs
	I1101 00:45:38.013307 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> /etc/ssl/certs/12028972.pem
	I1101 00:45:38.013428 1231442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:45:38.024691 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 00:45:38.054805 1231442 start.go:303] post-start completed in 170.414318ms
	I1101 00:45:38.055192 1231442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-992876
	I1101 00:45:38.073117 1231442 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/config.json ...
	I1101 00:45:38.073417 1231442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 00:45:38.073479 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:38.090985 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:38.186916 1231442 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 00:45:38.192657 1231442 start.go:128] duration metric: createHost completed in 9.346453334s
	I1101 00:45:38.192725 1231442 start.go:83] releasing machines lock for "ingress-addon-legacy-992876", held for 9.34661559s
	I1101 00:45:38.192804 1231442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-992876
	I1101 00:45:38.210575 1231442 ssh_runner.go:195] Run: cat /version.json
	I1101 00:45:38.210632 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:38.210881 1231442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:45:38.210943 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:38.235030 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:38.242515 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:38.333318 1231442 ssh_runner.go:195] Run: systemctl --version
	I1101 00:45:38.475423 1231442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:45:38.625129 1231442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:45:38.630940 1231442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:45:38.655546 1231442 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 00:45:38.655624 1231442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:45:38.691824 1231442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
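
The two find/-exec mv commands above neutralize any pre-existing loopback and bridge/podman CNI configs by renaming them to *.mk_disabled, leaving only the CNI that minikube installs later (kindnet, per the cni.go lines further down). The same effect in a short Go sketch, as a local glob-and-rename without sudo:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableCNIConfs renames matching CNI configs to *.mk_disabled, the
    // same effect as the logged find/mv commands.
    func disableCNIConfs(pattern string) error {
    	matches, err := filepath.Glob(pattern)
    	if err != nil {
    		return err
    	}
    	for _, m := range matches {
    		if strings.HasSuffix(m, ".mk_disabled") {
    			continue // already disabled
    		}
    		if err := os.Rename(m, m+".mk_disabled"); err != nil {
    			return err
    		}
    		fmt.Println("disabled", m)
    	}
    	return nil
    }

    func main() {
    	disableCNIConfs("/etc/cni/net.d/*loopback.conf*")
    	disableCNIConfs("/etc/cni/net.d/*bridge*")
    	disableCNIConfs("/etc/cni/net.d/*podman*")
    }
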
	I1101 00:45:38.691849 1231442 start.go:472] detecting cgroup driver to use...
	I1101 00:45:38.691883 1231442 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1101 00:45:38.691936 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:45:38.711143 1231442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:45:38.724574 1231442 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:45:38.724677 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:45:38.742172 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:45:38.759043 1231442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:45:38.859691 1231442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:45:38.958216 1231442 docker.go:220] disabling docker service ...
	I1101 00:45:38.958286 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:45:38.979675 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:45:38.993147 1231442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:45:39.103544 1231442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:45:39.203045 1231442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:45:39.216198 1231442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:45:39.235135 1231442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1101 00:45:39.235265 1231442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:39.247504 1231442 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:45:39.247619 1231442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:39.259363 1231442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:39.270923 1231442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
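
Taken together, the three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (reconstructed from the commands, not captured from the host; the surrounding TOML sections are omitted):

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
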
	I1101 00:45:39.283515 1231442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:45:39.294654 1231442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:45:39.304825 1231442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:45:39.315581 1231442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:45:39.423102 1231442 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 00:45:39.552255 1231442 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:45:39.552325 1231442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:45:39.557175 1231442 start.go:540] Will wait 60s for crictl version
	I1101 00:45:39.557251 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:39.561756 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:45:39.612632 1231442 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 00:45:39.612719 1231442 ssh_runner.go:195] Run: crio --version
	I1101 00:45:39.655103 1231442 ssh_runner.go:195] Run: crio --version
	I1101 00:45:39.701819 1231442 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1101 00:45:39.703705 1231442 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-992876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 00:45:39.721070 1231442 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 00:45:39.725707 1231442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:45:39.738856 1231442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1101 00:45:39.738927 1231442 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:45:39.791263 1231442 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1101 00:45:39.791355 1231442 ssh_runner.go:195] Run: which lz4
	I1101 00:45:39.795959 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1101 00:45:39.796062 1231442 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 00:45:39.800196 1231442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 00:45:39.800231 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1101 00:45:41.996164 1231442 crio.go:444] Took 2.200137 seconds to copy over tarball
	I1101 00:45:41.996241 1231442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 00:45:44.722159 1231442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.725873723s)
	I1101 00:45:44.722190 1231442 crio.go:451] Took 2.726005 seconds to extract the tarball
	I1101 00:45:44.722201 1231442 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 00:45:44.938481 1231442 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:45:44.978699 1231442 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1101 00:45:44.978722 1231442 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 00:45:44.978789 1231442 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:44.978791 1231442 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1101 00:45:44.978978 1231442 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1101 00:45:44.978983 1231442 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1101 00:45:44.979059 1231442 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1101 00:45:44.979070 1231442 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:44.979126 1231442 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:44.979136 1231442 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1101 00:45:44.980283 1231442 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:44.980745 1231442 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1101 00:45:44.981069 1231442 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1101 00:45:44.981112 1231442 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:44.981158 1231442 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1101 00:45:44.981238 1231442 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:44.981303 1231442 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1101 00:45:44.981066 1231442 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W1101 00:45:45.324221 1231442 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.324596 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1101 00:45:45.354188 1231442 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.354382 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1101 00:45:45.364105 1231442 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.364272 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:45.364610 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1101 00:45:45.375713 1231442 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.375940 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:45.388447 1231442 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1101 00:45:45.388517 1231442 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1101 00:45:45.388569 1231442 ssh_runner.go:195] Run: which crictl
	W1101 00:45:45.400499 1231442 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.400674 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1101 00:45:45.404590 1231442 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.404774 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1101 00:45:45.469470 1231442 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1101 00:45:45.469525 1231442 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1101 00:45:45.469578 1231442 ssh_runner.go:195] Run: which crictl
	W1101 00:45:45.526748 1231442 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.526931 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:45.557441 1231442 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1101 00:45:45.557610 1231442 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1101 00:45:45.557533 1231442 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1101 00:45:45.557662 1231442 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:45.557710 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.557804 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.596567 1231442 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1101 00:45:45.596603 1231442 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:45.596650 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.596730 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1101 00:45:45.596799 1231442 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1101 00:45:45.596815 1231442 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1101 00:45:45.596837 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.596899 1231442 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1101 00:45:45.596912 1231442 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1101 00:45:45.596930 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.597004 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1101 00:45:45.726183 1231442 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1101 00:45:45.726271 1231442 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:45.726343 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.726438 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:45.726511 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1101 00:45:45.726623 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1101 00:45:45.726663 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:45.726734 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1101 00:45:45.726792 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1101 00:45:45.726850 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1101 00:45:45.752473 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:45.891081 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1101 00:45:45.891215 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1101 00:45:45.891241 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1101 00:45:45.891300 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1101 00:45:45.891370 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1101 00:45:45.908275 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 00:45:45.908379 1231442 cache_images.go:92] LoadImages completed in 929.642429ms
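
The repeated "arch mismatch: want arm64 got amd64" warnings mean the registry's default manifests for these legacy v1.18.20 images resolve to amd64, so minikube falls back to its on-disk arm64 image cache, which is absent on this runner (hence the "Unable to load cached images" warning below). A small Go sketch of the kind of architecture check involved, using the local docker daemon rather than the remote registry the real image.go consults:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"runtime"
    	"strings"
    )

    // archOf asks the local docker daemon for an image's architecture.
    func archOf(image string) (string, error) {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Architecture}}", image).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	img := "registry.k8s.io/kube-proxy:v1.18.20"
    	arch, err := archOf(img)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if arch != runtime.GOARCH {
    		fmt.Printf("image %s arch mismatch: want %s got %s\n", img, runtime.GOARCH, arch)
    	}
    }
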
	W1101 00:45:45.908467 1231442 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	I1101 00:45:45.908565 1231442 ssh_runner.go:195] Run: crio config
	I1101 00:45:45.966955 1231442 cni.go:84] Creating CNI manager for ""
	I1101 00:45:45.967023 1231442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:45:45.967073 1231442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:45:45.967116 1231442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-992876 NodeName:ingress-addon-legacy-992876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1101 00:45:45.967321 1231442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-992876"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:45:45.967444 1231442 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-992876 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-992876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 00:45:45.967550 1231442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1101 00:45:45.978037 1231442 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:45:45.978136 1231442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:45:45.988790 1231442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1101 00:45:46.011544 1231442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1101 00:45:46.033775 1231442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1101 00:45:46.054982 1231442 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 00:45:46.059661 1231442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
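
This grep -v / echo / cp one-liner (also used earlier for host.minikube.internal) is an idempotent hosts-file update: drop any stale line for the host, append the fresh mapping, and copy the staged file into place. An equivalent Go sketch with a hypothetical ensureHostsEntry helper, writing the file directly instead of staging through /tmp and sudo:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry re-adds "ip<TAB>host" after removing any stale line
    // for the same host, matching the effect of the logged bash command.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop the stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
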
	I1101 00:45:46.073091 1231442 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876 for IP: 192.168.49.2
	I1101 00:45:46.073121 1231442 certs.go:190] acquiring lock for shared ca certs: {Name:mk19a54d78f5cf4996fdfc5da5ee5226ef1f844f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.073252 1231442 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key
	I1101 00:45:46.073296 1231442 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key
	I1101 00:45:46.073347 1231442 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key
	I1101 00:45:46.073362 1231442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt with IP's: []
	I1101 00:45:46.306794 1231442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt ...
	I1101 00:45:46.306826 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: {Name:mk875a1d5c7486c9a5ed1078452ffb0a1ffb5ae7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.307030 1231442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key ...
	I1101 00:45:46.307050 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key: {Name:mk3fd496714d5fd899c9e37395177b9cc2d941e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.307148 1231442 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key.dd3b5fb2
	I1101 00:45:46.307170 1231442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 00:45:46.588347 1231442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt.dd3b5fb2 ...
	I1101 00:45:46.588377 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt.dd3b5fb2: {Name:mkffe2c3cee48d112aec67d7d22d7663057bc731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.588582 1231442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key.dd3b5fb2 ...
	I1101 00:45:46.588598 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key.dd3b5fb2: {Name:mk5925014c6fbae288bd7a39d7b4bd81834fdf97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.588678 1231442 certs.go:337] copying /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt
	I1101 00:45:46.588756 1231442 certs.go:341] copying /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key
	I1101 00:45:46.588813 1231442 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key
	I1101 00:45:46.588832 1231442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt with IP's: []
	I1101 00:45:47.215772 1231442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt ...
	I1101 00:45:47.215806 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt: {Name:mk10cdd726b2e34709ae05f8fec8af4919dd360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:47.216005 1231442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key ...
	I1101 00:45:47.216019 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key: {Name:mk419d291d90ff351ec65e5f8058266b4b67400b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:47.216103 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 00:45:47.216127 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 00:45:47.216145 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 00:45:47.216161 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 00:45:47.216172 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:45:47.216191 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:45:47.216207 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:45:47.216241 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:45:47.216317 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem (1338 bytes)
	W1101 00:45:47.216355 1231442 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897_empty.pem, impossibly tiny 0 bytes
	I1101 00:45:47.216369 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:45:47.216399 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:45:47.216426 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:45:47.216460 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem (1675 bytes)
	I1101 00:45:47.216509 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 00:45:47.216549 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem -> /usr/share/ca-certificates/1202897.pem
	I1101 00:45:47.216567 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> /usr/share/ca-certificates/12028972.pem
	I1101 00:45:47.216583 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:45:47.217190 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:45:47.244847 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 00:45:47.273068 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:45:47.301478 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:45:47.329959 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:45:47.358317 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:45:47.386085 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:45:47.414480 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:45:47.442608 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem --> /usr/share/ca-certificates/1202897.pem (1338 bytes)
	I1101 00:45:47.470908 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /usr/share/ca-certificates/12028972.pem (1708 bytes)
	I1101 00:45:47.499270 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:45:47.528038 1231442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:45:47.548863 1231442 ssh_runner.go:195] Run: openssl version
	I1101 00:45:47.555834 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1202897.pem && ln -fs /usr/share/ca-certificates/1202897.pem /etc/ssl/certs/1202897.pem"
	I1101 00:45:47.567488 1231442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1202897.pem
	I1101 00:45:47.571950 1231442 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  1 00:39 /usr/share/ca-certificates/1202897.pem
	I1101 00:45:47.572014 1231442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1202897.pem
	I1101 00:45:47.580487 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1202897.pem /etc/ssl/certs/51391683.0"
	I1101 00:45:47.592157 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12028972.pem && ln -fs /usr/share/ca-certificates/12028972.pem /etc/ssl/certs/12028972.pem"
	I1101 00:45:47.603824 1231442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12028972.pem
	I1101 00:45:47.608493 1231442 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  1 00:39 /usr/share/ca-certificates/12028972.pem
	I1101 00:45:47.608561 1231442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12028972.pem
	I1101 00:45:47.617510 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12028972.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:45:47.629032 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:45:47.640301 1231442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:45:47.645223 1231442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  1 00:33 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:45:47.645314 1231442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:45:47.653828 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
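	The run above (00:45:47.53–47.66) is the standard OpenSSL trust-store dance: each CA is copied under /usr/share/ca-certificates, hashed with `openssl x509 -hash -noout`, and symlinked into /etc/ssl/certs as `<hash>.0` (e.g. b5213941.0 for minikubeCA.pem) so OpenSSL-linked clients can find it by subject hash. A minimal standalone sketch of those two steps, assuming openssl on PATH (this is not minikube's actual certs.go helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the logged steps: hash the PEM's subject with
// openssl, then force-create the /etc/ssl/certs/<hash>.0 symlink.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}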
	I1101 00:45:47.665081 1231442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:45:47.669328 1231442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:45:47.669424 1231442 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-992876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-992876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:47.669510 1231442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:45:47.669568 1231442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:45:47.710833 1231442 cri.go:89] found id: ""
	I1101 00:45:47.710903 1231442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:45:47.721384 1231442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:45:47.731820 1231442 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1101 00:45:47.731941 1231442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:45:47.742548 1231442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:45:47.742589 1231442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 00:45:47.798714 1231442 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1101 00:45:47.799108 1231442 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 00:45:47.848764 1231442 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1101 00:45:47.848865 1231442 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1101 00:45:47.848925 1231442 kubeadm.go:322] OS: Linux
	I1101 00:45:47.849013 1231442 kubeadm.go:322] CGROUPS_CPU: enabled
	I1101 00:45:47.849092 1231442 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1101 00:45:47.849167 1231442 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1101 00:45:47.849232 1231442 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1101 00:45:47.849310 1231442 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1101 00:45:47.849426 1231442 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1101 00:45:47.942982 1231442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 00:45:47.943146 1231442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 00:45:47.943277 1231442 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 00:45:48.192726 1231442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:45:48.194217 1231442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:45:48.194495 1231442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 00:45:48.301465 1231442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 00:45:48.304764 1231442 out.go:204]   - Generating certificates and keys ...
	I1101 00:45:48.304888 1231442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 00:45:48.305010 1231442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 00:45:48.804619 1231442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 00:45:49.365237 1231442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1101 00:45:49.846317 1231442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1101 00:45:50.545660 1231442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1101 00:45:51.220097 1231442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1101 00:45:51.220492 1231442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-992876 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 00:45:51.755362 1231442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1101 00:45:51.755774 1231442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-992876 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 00:45:52.572807 1231442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 00:45:52.854602 1231442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 00:45:53.285504 1231442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1101 00:45:53.285830 1231442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 00:45:53.788719 1231442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 00:45:54.670535 1231442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 00:45:55.136813 1231442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 00:45:55.607317 1231442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 00:45:55.608353 1231442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 00:45:55.610886 1231442 out.go:204]   - Booting up control plane ...
	I1101 00:45:55.611004 1231442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:45:55.623188 1231442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:45:55.623277 1231442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:45:55.623371 1231442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 00:45:55.623559 1231442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 00:46:07.624760 1231442 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002085 seconds
	I1101 00:46:07.624876 1231442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 00:46:07.642733 1231442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 00:46:08.160678 1231442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 00:46:08.160826 1231442 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-992876 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1101 00:46:08.670863 1231442 kubeadm.go:322] [bootstrap-token] Using token: js3x75.dl52zft1ly2rea4m
	I1101 00:46:08.672909 1231442 out.go:204]   - Configuring RBAC rules ...
	I1101 00:46:08.673052 1231442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 00:46:08.677272 1231442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 00:46:08.684464 1231442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 00:46:08.686980 1231442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 00:46:08.689528 1231442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 00:46:08.692966 1231442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 00:46:08.700857 1231442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 00:46:08.977880 1231442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 00:46:09.090547 1231442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 00:46:09.094730 1231442 kubeadm.go:322] 
	I1101 00:46:09.094802 1231442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 00:46:09.094808 1231442 kubeadm.go:322] 
	I1101 00:46:09.094880 1231442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 00:46:09.094895 1231442 kubeadm.go:322] 
	I1101 00:46:09.094919 1231442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 00:46:09.094974 1231442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 00:46:09.095021 1231442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 00:46:09.095026 1231442 kubeadm.go:322] 
	I1101 00:46:09.095075 1231442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 00:46:09.095145 1231442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 00:46:09.095208 1231442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 00:46:09.095213 1231442 kubeadm.go:322] 
	I1101 00:46:09.095291 1231442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 00:46:09.095363 1231442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 00:46:09.095379 1231442 kubeadm.go:322] 
	I1101 00:46:09.095457 1231442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token js3x75.dl52zft1ly2rea4m \
	I1101 00:46:09.095556 1231442 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 \
	I1101 00:46:09.095578 1231442 kubeadm.go:322]     --control-plane 
	I1101 00:46:09.095583 1231442 kubeadm.go:322] 
	I1101 00:46:09.095661 1231442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 00:46:09.095666 1231442 kubeadm.go:322] 
	I1101 00:46:09.095742 1231442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token js3x75.dl52zft1ly2rea4m \
	I1101 00:46:09.095852 1231442 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 
	I1101 00:46:09.099242 1231442 kubeadm.go:322] W1101 00:45:47.797819    1233 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1101 00:46:09.099466 1231442 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1101 00:46:09.099573 1231442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 00:46:09.099698 1231442 kubeadm.go:322] W1101 00:45:55.618121    1233 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1101 00:46:09.099823 1231442 kubeadm.go:322] W1101 00:45:55.619306    1233 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1101 00:46:09.099841 1231442 cni.go:84] Creating CNI manager for ""
	I1101 00:46:09.099849 1231442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:46:09.102282 1231442 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 00:46:09.104070 1231442 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:46:09.109092 1231442 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1101 00:46:09.109116 1231442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:46:09.133630 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:46:09.548796 1231442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:46:09.548940 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:09.549034 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=ingress-addon-legacy-992876 minikube.k8s.io/updated_at=2023_11_01T00_46_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:09.687609 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:09.687623 1231442 ops.go:34] apiserver oom_adj: -16
	I1101 00:46:09.807030 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:10.402192 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:10.901664 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:11.401949 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:11.901717 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:12.402214 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:12.902529 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:13.401649 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:13.902591 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:14.402220 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:14.902204 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:15.401746 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:15.901685 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:16.402319 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:16.901959 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:17.402241 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:17.901650 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:18.402204 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:18.902429 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:19.401790 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:19.902168 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:20.402323 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:20.902615 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:21.402126 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:21.902662 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:22.402599 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:22.902362 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:23.401685 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:23.902165 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:24.054520 1231442 kubeadm.go:1081] duration metric: took 14.505631799s to wait for elevateKubeSystemPrivileges.
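	The wall of identical `kubectl get sa default` invocations above is a fixed-interval poll: minikube retries roughly every 500ms until the default service account exists, the signal that the controller-manager's token machinery is up (the 14.5s total is logged once the loop exits). A hedged sketch of that poll-until-ready pattern, with hypothetical interval/timeout values (not minikube's elevateKubeSystemPrivileges implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const interval = 500 * time.Millisecond // assumed cadence, per the log timestamps
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exits 0 only once the "default" ServiceAccount has been created.
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(interval)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}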
	I1101 00:46:24.054550 1231442 kubeadm.go:406] StartCluster complete in 36.385130744s
	I1101 00:46:24.054576 1231442 settings.go:142] acquiring lock: {Name:mke36bce3f316e572c27d9ade5690ad307116f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:46:24.054637 1231442 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:46:24.055354 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/kubeconfig: {Name:mk54047efde1577abb33547e94416477b8fd3071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:46:24.056085 1231442 kapi.go:59] client config for ingress-addon-legacy-992876: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
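	The rest.Config dump above is the client minikube builds against the new apiserver: client-cert auth from the profile directory, the minikube CA for server verification, and QPS:0/Burst:0, which lets client-go fall back to its defaults (roughly 5 requests/s with a burst of 10) and is what produces the "Waited for ... due to client-side throttling" lines further down. A sketch of constructing an equivalent typed client with client-go (an illustrative example, not minikube's kapi helper):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig minikube just wrote (path taken from the log above).
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/17486-1197516/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Hypothetical values: raising these avoids the client-side throttling
	// waits seen in the log; minikube itself leaves the defaults in place.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}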
	I1101 00:46:24.057312 1231442 config.go:182] Loaded profile config "ingress-addon-legacy-992876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1101 00:46:24.057394 1231442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:46:24.057537 1231442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:46:24.057633 1231442 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-992876"
	I1101 00:46:24.057652 1231442 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-992876"
	I1101 00:46:24.057709 1231442 host.go:66] Checking if "ingress-addon-legacy-992876" exists ...
	I1101 00:46:24.058197 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:46:24.058855 1231442 cert_rotation.go:137] Starting client certificate rotation controller
	I1101 00:46:24.059340 1231442 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-992876"
	I1101 00:46:24.059358 1231442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-992876"
	I1101 00:46:24.059656 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:46:24.111731 1231442 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:46:24.114247 1231442 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:46:24.114266 1231442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 00:46:24.114328 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:46:24.112485 1231442 kapi.go:59] client config for ingress-addon-legacy-992876: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:46:24.114759 1231442 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-992876"
	I1101 00:46:24.114788 1231442 host.go:66] Checking if "ingress-addon-legacy-992876" exists ...
	I1101 00:46:24.115243 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:46:24.148132 1231442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-992876" context rescaled to 1 replicas
	I1101 00:46:24.148172 1231442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:46:24.151741 1231442 out.go:177] * Verifying Kubernetes components...
	I1101 00:46:24.153504 1231442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:46:24.171908 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:46:24.181025 1231442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 00:46:24.181045 1231442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 00:46:24.181106 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:46:24.223709 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:46:24.295527 1231442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 00:46:24.296319 1231442 kapi.go:59] client config for ingress-addon-legacy-992876: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:46:24.296858 1231442 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-992876" to be "Ready" ...
	I1101 00:46:24.365071 1231442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:46:24.426344 1231442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 00:46:24.776806 1231442 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
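	For reference, the sed pipeline at 00:46:24.295 is what performs that injection: it rewrites the coredns ConfigMap so the Corefile gains a hosts stanza ahead of its `forward . /etc/resolv.conf` line (plus a `log` directive before `errors`). Reconstructed from the sed expression above, the inserted stanza is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

	192.168.49.1 is the Docker network gateway, so this gives every pod a stable DNS name for the host machine.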
	I1101 00:46:24.877572 1231442 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1101 00:46:24.879094 1231442 addons.go:502] enable addons completed in 821.549792ms: enabled=[storage-provisioner default-storageclass]
	I1101 00:46:26.379543 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:28.875582 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:30.875959 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:33.376065 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:35.376428 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:37.876364 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:40.376351 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:42.876688 1231442 node_ready.go:49] node "ingress-addon-legacy-992876" has status "Ready":"True"
	I1101 00:46:42.876717 1231442 node_ready.go:38] duration metric: took 18.579810147s waiting for node "ingress-addon-legacy-992876" to be "Ready" ...
	I1101 00:46:42.876729 1231442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:46:42.885565 1231442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-447wp" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:44.896047 1231442 pod_ready.go:102] pod "coredns-66bff467f8-447wp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 00:46:24 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 00:46:47.398404 1231442 pod_ready.go:102] pod "coredns-66bff467f8-447wp" in "kube-system" namespace has status "Ready":"False"
	I1101 00:46:49.900110 1231442 pod_ready.go:102] pod "coredns-66bff467f8-447wp" in "kube-system" namespace has status "Ready":"False"
	I1101 00:46:50.397766 1231442 pod_ready.go:92] pod "coredns-66bff467f8-447wp" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.397792 1231442 pod_ready.go:81] duration metric: took 7.512147824s waiting for pod "coredns-66bff467f8-447wp" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.397807 1231442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.402363 1231442 pod_ready.go:92] pod "etcd-ingress-addon-legacy-992876" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.402387 1231442 pod_ready.go:81] duration metric: took 4.573071ms waiting for pod "etcd-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.402402 1231442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.406768 1231442 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-992876" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.406793 1231442 pod_ready.go:81] duration metric: took 4.383386ms waiting for pod "kube-apiserver-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.406805 1231442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.411428 1231442 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-992876" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.411453 1231442 pod_ready.go:81] duration metric: took 4.639859ms waiting for pod "kube-controller-manager-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.411464 1231442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qxwkc" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.416128 1231442 pod_ready.go:92] pod "kube-proxy-qxwkc" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.416155 1231442 pod_ready.go:81] duration metric: took 4.683946ms waiting for pod "kube-proxy-qxwkc" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.416166 1231442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.593537 1231442 request.go:629] Waited for 177.31245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-992876
	I1101 00:46:50.793539 1231442 request.go:629] Waited for 197.35082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-992876
	I1101 00:46:50.796140 1231442 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-992876" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.796166 1231442 pod_ready.go:81] duration metric: took 379.992773ms waiting for pod "kube-scheduler-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.796180 1231442 pod_ready.go:38] duration metric: took 7.919438827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:46:50.796205 1231442 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:46:50.796271 1231442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:46:50.809038 1231442 api_server.go:72] duration metric: took 26.66083189s to wait for apiserver process to appear ...
	I1101 00:46:50.809063 1231442 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:46:50.809079 1231442 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 00:46:50.817782 1231442 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 00:46:50.818762 1231442 api_server.go:141] control plane version: v1.18.20
	I1101 00:46:50.818788 1231442 api_server.go:131] duration metric: took 9.717206ms to wait for apiserver health ...
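	The healthz probe above is just an HTTPS GET against the apiserver; a 200 with body "ok" counts as healthy. A minimal sketch of the same check (it skips TLS verification purely for brevity, whereas minikube verifies against its own CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustration only: InsecureSkipVerify must not be used in real tooling.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // healthy apiserver prints "200: ok"
}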
	I1101 00:46:50.818797 1231442 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:46:50.993198 1231442 request.go:629] Waited for 174.307351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1101 00:46:50.999133 1231442 system_pods.go:59] 8 kube-system pods found
	I1101 00:46:50.999178 1231442 system_pods.go:61] "coredns-66bff467f8-447wp" [bd34668c-987e-41fe-8236-9e2c434eee33] Running
	I1101 00:46:50.999185 1231442 system_pods.go:61] "etcd-ingress-addon-legacy-992876" [beec4855-e8f5-4625-a517-ca298207b5b9] Running
	I1101 00:46:50.999192 1231442 system_pods.go:61] "kindnet-d4npj" [14459195-556d-40fb-a096-0a434c3c0177] Running
	I1101 00:46:50.999197 1231442 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-992876" [7d8f1186-f057-4e39-9cc0-0b276174d187] Running
	I1101 00:46:50.999203 1231442 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-992876" [f3b92fad-1e62-4f94-9a38-95e6157de794] Running
	I1101 00:46:50.999208 1231442 system_pods.go:61] "kube-proxy-qxwkc" [f519b66a-24e3-4796-bbab-a043a2e7104f] Running
	I1101 00:46:50.999213 1231442 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-992876" [b9a94eb0-3306-4928-8228-2ed84b0f7dd1] Running
	I1101 00:46:50.999221 1231442 system_pods.go:61] "storage-provisioner" [b090f608-18cc-4c75-b85f-08c99204530c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:46:50.999229 1231442 system_pods.go:74] duration metric: took 180.424263ms to wait for pod list to return data ...
	I1101 00:46:50.999238 1231442 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:46:51.193601 1231442 request.go:629] Waited for 194.272475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:46:51.196243 1231442 default_sa.go:45] found service account: "default"
	I1101 00:46:51.196268 1231442 default_sa.go:55] duration metric: took 197.0237ms for default service account to be created ...
	I1101 00:46:51.196288 1231442 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:46:51.393619 1231442 request.go:629] Waited for 197.256855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1101 00:46:51.400830 1231442 system_pods.go:86] 8 kube-system pods found
	I1101 00:46:51.400861 1231442 system_pods.go:89] "coredns-66bff467f8-447wp" [bd34668c-987e-41fe-8236-9e2c434eee33] Running
	I1101 00:46:51.400869 1231442 system_pods.go:89] "etcd-ingress-addon-legacy-992876" [beec4855-e8f5-4625-a517-ca298207b5b9] Running
	I1101 00:46:51.400875 1231442 system_pods.go:89] "kindnet-d4npj" [14459195-556d-40fb-a096-0a434c3c0177] Running
	I1101 00:46:51.400889 1231442 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-992876" [7d8f1186-f057-4e39-9cc0-0b276174d187] Running
	I1101 00:46:51.400899 1231442 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-992876" [f3b92fad-1e62-4f94-9a38-95e6157de794] Running
	I1101 00:46:51.400904 1231442 system_pods.go:89] "kube-proxy-qxwkc" [f519b66a-24e3-4796-bbab-a043a2e7104f] Running
	I1101 00:46:51.400917 1231442 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-992876" [b9a94eb0-3306-4928-8228-2ed84b0f7dd1] Running
	I1101 00:46:51.400930 1231442 system_pods.go:89] "storage-provisioner" [b090f608-18cc-4c75-b85f-08c99204530c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:46:51.400943 1231442 system_pods.go:126] duration metric: took 204.648128ms to wait for k8s-apps to be running ...
	I1101 00:46:51.400951 1231442 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:46:51.401028 1231442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:46:51.426949 1231442 system_svc.go:56] duration metric: took 25.985468ms WaitForService to wait for kubelet.
	I1101 00:46:51.426996 1231442 kubeadm.go:581] duration metric: took 27.278782752s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:46:51.427020 1231442 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:46:51.593384 1231442 request.go:629] Waited for 166.276969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1101 00:46:51.596917 1231442 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 00:46:51.596949 1231442 node_conditions.go:123] node cpu capacity is 2
	I1101 00:46:51.596962 1231442 node_conditions.go:105] duration metric: took 169.936315ms to run NodePressure ...
	I1101 00:46:51.596974 1231442 start.go:228] waiting for startup goroutines ...
	I1101 00:46:51.597004 1231442 start.go:233] waiting for cluster config update ...
	I1101 00:46:51.597015 1231442 start.go:242] writing updated cluster config ...
	I1101 00:46:51.597320 1231442 ssh_runner.go:195] Run: rm -f paused
	I1101 00:46:51.683524 1231442 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1101 00:46:51.686138 1231442 out.go:177] 
	W1101 00:46:51.688521 1231442 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1101 00:46:51.690392 1231442 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1101 00:46:51.692271 1231442 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-992876" cluster and "default" namespace by default
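	(The skew warning above follows kubectl's support policy: a kubectl binary is only guaranteed to work within one minor version of the apiserver, and 1.28 is ten minors ahead of 1.18; the suggested `minikube kubectl` wrapper sidesteps this by fetching a kubectl that matches the cluster's v1.18.20.)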
	
	* 
	* ==> CRI-O <==
	* Nov 01 00:51:27 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:27.390161887Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=dd317d9c-0e46-4223-9107-937bcad79b3e name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:51:27 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:27.390707929Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a403b097-4bcd-4945-9415-67e674e6cf2b name=/runtime.v1alpha2.ImageService/PullImage
	Nov 01 00:51:27 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:27.392586995Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:51:31 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:31.390248601Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=cd8d538d-90e7-4733-b472-ff5b7e4c0be9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:51:31 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:31.390516736Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=cd8d538d-90e7-4733-b472-ff5b7e4c0be9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:51:46 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:46.390050743Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=dfdf0dcc-7a96-4a8a-a5d1-64631e03028c name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:51:46 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:46.390353544Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=dfdf0dcc-7a96-4a8a-a5d1-64631e03028c name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:51:58 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:58.389986525Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=8999a189-d80a-47d6-b22e-fad931640d50 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:51:58 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:51:58.390263258Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=8999a189-d80a-47d6-b22e-fad931640d50 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:12 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:12.390199051Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=4f47628d-bdfa-4af9-9cdc-1961a3ff1038 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:12 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:12.390476621Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=4f47628d-bdfa-4af9-9cdc-1961a3ff1038 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:23 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:23.389950898Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=ce96478e-3b2e-412e-b2d2-a3d1a598a9fc name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:23 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:23.390228148Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=ce96478e-3b2e-412e-b2d2-a3d1a598a9fc name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:24 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:24.390044071Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=e94129dc-33fd-494c-8117-6cce9bbc75fd name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:24 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:24.390317055Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=e94129dc-33fd-494c-8117-6cce9bbc75fd name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:37 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:37.389929414Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=16955dff-54c6-48f7-8615-cc6170296f7e name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:37 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:37.390212548Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=16955dff-54c6-48f7-8615-cc6170296f7e name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:38 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:38.389957135Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=9e7e7cc7-10e5-435f-87b4-42f13242f334 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:38 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:38.390223907Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=9e7e7cc7-10e5-435f-87b4-42f13242f334 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:50 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:50.389945307Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=02227d7c-07b5-46c0-a90d-287743ce3c5d name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:50 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:50.390215870Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=02227d7c-07b5-46c0-a90d-287743ce3c5d name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:50 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:50.390733661Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=32d1bb94-2bc9-4427-b657-a7256e908d2a name=/runtime.v1alpha2.ImageService/PullImage
	Nov 01 00:52:50 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:50.393179001Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:52:52 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:52.390264467Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=fd069128-cd7c-41c2-8162-4b9e2e6d0534 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:52:52 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:52:52.390536605Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=fd069128-cd7c-41c2-8162-4b9e2e6d0534 name=/runtime.v1alpha2.ImageService/ImageStatus
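	Note: the crio entries above show the runtime looping on ImageStatus checks for docker.io/jettech/kube-webhook-certgen:v1.5.1 and retrying the pull without the image ever landing in the local store. A minimal workaround sketch, assuming the image can be fetched from a machine that is not rate-limited and that this minikube build supports the `image load` subcommand, would be to side-load the image into the node so no registry pull is needed:
	
	    out/minikube-linux-arm64 -p ingress-addon-legacy-992876 image load docker.io/jettech/kube-webhook-certgen:v1.5.1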
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1b41f897ebff0       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   6 minutes ago       Running             storage-provisioner       0                   a6c9d905b3c1d       storage-provisioner
	0df625f0dfda5       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  6 minutes ago       Running             coredns                   0                   7dda86566ca3c       coredns-66bff467f8-447wp
	2e16f8346f39e       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                6 minutes ago       Running             kindnet-cni               0                   d70f9e4820a4e       kindnet-d4npj
	1e7d915d10b43       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  6 minutes ago       Running             kube-proxy                0                   0ab78bb81fc11       kube-proxy-qxwkc
	39f31514e884c       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  6 minutes ago       Running             kube-controller-manager   0                   541d6044ef64c       kube-controller-manager-ingress-addon-legacy-992876
	8ad01671a57d6       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  6 minutes ago       Running             kube-scheduler            0                   e1b9ca063c0a5       kube-scheduler-ingress-addon-legacy-992876
	8e4ec398cc7c4       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  6 minutes ago       Running             etcd                      0                   987bd08ca5697       etcd-ingress-addon-legacy-992876
	a0d57dc63c1b3       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  6 minutes ago       Running             kube-apiserver            0                   15fb43877ef74       kube-apiserver-ingress-addon-legacy-992876
	
	* 
	* ==> coredns [0df625f0dfda532c66e4a68dee83b44e5e21939940390f87095c61dc7d190972] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:59674 - 51483 "HINFO IN 6518257975335028987.8642677448013009581. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01330515s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-992876
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-992876
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=ingress-addon-legacy-992876
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_46_09_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:46:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-992876
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:52:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:52:12 +0000   Wed, 01 Nov 2023 00:46:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:52:12 +0000   Wed, 01 Nov 2023 00:46:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:52:12 +0000   Wed, 01 Nov 2023 00:46:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:52:12 +0000   Wed, 01 Nov 2023 00:46:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-992876
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 deb9daf9c2264630b846097fd1294d82
	  System UUID:                87f616e1-2ec9-4616-b8fd-46b18f0be87b
	  Boot ID:                    11045d5e-2454-4ceb-8984-3078b90f4cad
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-xsccv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-admission-patch-6k5st                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-cqvqs              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m2s
	  kube-system                 coredns-66bff467f8-447wp                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m31s
	  kube-system                 etcd-ingress-addon-legacy-992876                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kindnet-d4npj                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-992876             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-992876    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-proxy-qxwkc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-992876             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  6m56s (x4 over 6m56s)  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m56s (x4 over 6m56s)  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m56s (x4 over 6m56s)  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m42s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m42s                  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s                  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m42s                  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m29s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                6m12s                  kubelet     Node ingress-addon-legacy-992876 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] '70643b0000000000'
	[  +0.000767] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001031] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000004e2890c8
	[  +0.001063] FS-Cache: N-key=[8] '70643b0000000000'
	[  +0.004430] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000527cc4c3
	[  +0.001080] FS-Cache: O-key=[8] '70643b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000008a5a3042
	[  +0.001070] FS-Cache: N-key=[8] '70643b0000000000'
	[  +2.029136] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001008] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000d9fe484b
	[  +0.001140] FS-Cache: O-key=[8] '6f643b0000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000004e2890c8
	[  +0.001074] FS-Cache: N-key=[8] '6f643b0000000000'
	[  +0.310063] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=000000005bafb08b
	[  +0.001102] FS-Cache: O-key=[8] '75643b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=00000000763bdf7d
	[  +0.001071] FS-Cache: N-key=[8] '75643b0000000000'
	
	* 
	* ==> etcd [8e4ec398cc7c440723355258d2257fb31018527d58de0b9fe1726bee93c8e919] <==
	* raft2023/11/01 00:46:01 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/01 00:46:01 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/01 00:46:01 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-01 00:46:01.201100 W | auth: simple token is not cryptographically signed
	2023-11-01 00:46:01.208147 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-01 00:46:01.210202 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-01 00:46:01.210359 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-01 00:46:01.210580 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-01 00:46:01.211055 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/11/01 00:46:01 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-01 00:46:01.211320 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/11/01 00:46:02 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/01 00:46:02 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/01 00:46:02 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/01 00:46:02 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/01 00:46:02 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-01 00:46:02.160906 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-01 00:46:02.161621 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-01 00:46:02.161708 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-01 00:46:02.161765 I | etcdserver: published {Name:ingress-addon-legacy-992876 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-01 00:46:02.161882 I | embed: ready to serve client requests
	2023-11-01 00:46:02.163481 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-01 00:46:02.171187 I | embed: ready to serve client requests
	2023-11-01 00:46:02.172370 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-01 00:46:24.612977 W | etcdserver: request "header:<ID:8128024845207824250 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/ingress-addon-legacy-992876.17935938ac7d0358\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/ingress-addon-legacy-992876.17935938ac7d0358\" value_size:668 lease:8128024845207823848 >> failure:<>>" with result "size:16" took too long (136.143806ms) to execute
	
	* 
	* ==> kernel <==
	*  00:52:54 up  8:35,  0 users,  load average: 0.33, 0.59, 1.19
	Linux ingress-addon-legacy-992876 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [2e16f8346f39ed52a398d8e097d7ebf925359e814b166ef78cd955db0342e7de] <==
	* I1101 00:50:47.911070       1 main.go:227] handling current node
	I1101 00:50:57.919805       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:50:57.919833       1 main.go:227] handling current node
	I1101 00:51:07.928651       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:51:07.928680       1 main.go:227] handling current node
	I1101 00:51:17.932502       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:51:17.932528       1 main.go:227] handling current node
	I1101 00:51:27.935483       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:51:27.935512       1 main.go:227] handling current node
	I1101 00:51:37.943933       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:51:37.943962       1 main.go:227] handling current node
	I1101 00:51:47.947203       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:51:47.947229       1 main.go:227] handling current node
	I1101 00:51:57.954995       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:51:57.955025       1 main.go:227] handling current node
	I1101 00:52:07.963900       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:07.963930       1 main.go:227] handling current node
	I1101 00:52:17.972793       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:17.972817       1 main.go:227] handling current node
	I1101 00:52:27.983196       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:27.983338       1 main.go:227] handling current node
	I1101 00:52:37.993234       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:37.993263       1 main.go:227] handling current node
	I1101 00:52:47.996709       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:47.996738       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [a0d57dc63c1b30a6631517dc123efc0e2c011f483027961c382ba3898f284dc7] <==
	* I1101 00:46:06.128604       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I1101 00:46:06.129128       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1101 00:46:06.129137       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1101 00:46:06.252384       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 00:46:06.252471       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:46:06.256533       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:46:06.275156       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1101 00:46:06.343839       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1101 00:46:07.042294       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1101 00:46:07.042427       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1101 00:46:07.055698       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1101 00:46:07.059587       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1101 00:46:07.059608       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1101 00:46:07.451960       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 00:46:07.491101       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1101 00:46:07.630202       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 00:46:07.631168       1 controller.go:609] quota admission added evaluator for: endpoints
	I1101 00:46:07.636820       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 00:46:08.423092       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1101 00:46:08.950649       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1101 00:46:09.079020       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1101 00:46:12.321989       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 00:46:23.885108       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1101 00:46:24.453509       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1101 00:46:52.588524       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [39f31514e884c9b8f272276929f504ba3213d6d23db1f14f6682e6ad5a8b5f12] <==
	* I1101 00:46:24.154532       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"a1c89c3e-bdf9-4185-90a9-53fb37d1fd7a", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1101 00:46:24.231592       1 shared_informer.go:230] Caches are synced for attach detach 
	I1101 00:46:24.249384       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"319e336a-6f08-49bf-9df9-4b34bafabe84", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-b69k9
	I1101 00:46:24.368892       1 shared_informer.go:230] Caches are synced for HPA 
	I1101 00:46:24.395583       1 shared_informer.go:230] Caches are synced for taint 
	I1101 00:46:24.395772       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1101 00:46:24.395854       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-992876. Assuming now as a timestamp.
	I1101 00:46:24.395925       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1101 00:46:24.396281       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1101 00:46:24.397485       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-992876", UID:"2c8b9661-d643-497a-9b42-94d8da4503ba", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-992876 event: Registered Node ingress-addon-legacy-992876 in Controller
	I1101 00:46:24.421602       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1101 00:46:24.445266       1 shared_informer.go:230] Caches are synced for resource quota 
	I1101 00:46:24.445495       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1101 00:46:24.445593       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1101 00:46:24.482241       1 shared_informer.go:230] Caches are synced for resource quota 
	I1101 00:46:24.482383       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1101 00:46:24.708328       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d30a1030-3d5c-4d82-a3b1-451858b49c94", APIVersion:"apps/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-qxwkc
	I1101 00:46:24.739607       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a4504450-622a-4a0e-bfb4-8e77219eb7ce", APIVersion:"apps/v1", ResourceVersion:"229", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-d4npj
	E1101 00:46:24.822618       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"a4504450-622a-4a0e-bfb4-8e77219eb7ce", ResourceVersion:"229", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63834396369, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001486cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001486ce0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001486d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001486d20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001486d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001486d60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001486d80)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001486dc0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000882aa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400036be98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40005003f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000e600)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400036bee0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1101 00:46:44.396861       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1101 00:46:52.580338       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"91ae13a1-44b8-4e10-b1ed-a96c04c9f131", APIVersion:"apps/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1101 00:46:52.603434       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9921b77c-8a0c-46b4-a428-76b1cb6477c6", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-xsccv
	I1101 00:46:52.605925       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"39fc3162-e643-439a-b842-a981e2da17d8", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-cqvqs
	I1101 00:46:52.662556       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e31ec691-8dee-48dc-85c7-22477f81feb9", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-6k5st
	
	* 
	* ==> kube-proxy [1e7d915d10b4335ca8c7efe7544aad48640013406847fa709d78e2c6a1b9bceb] <==
	* W1101 00:46:25.287939       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1101 00:46:25.299350       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1101 00:46:25.299400       1 server_others.go:186] Using iptables Proxier.
	I1101 00:46:25.299745       1 server.go:583] Version: v1.18.20
	I1101 00:46:25.306461       1 config.go:315] Starting service config controller
	I1101 00:46:25.306495       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1101 00:46:25.306557       1 config.go:133] Starting endpoints config controller
	I1101 00:46:25.306569       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1101 00:46:25.406666       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1101 00:46:25.406667       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [8ad01671a57d6140cc72551ae79ee5411be2283d0e35a1ad1bd7d76c06951bfd] <==
	* W1101 00:46:06.126196       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:46:06.244260       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1101 00:46:06.244365       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1101 00:46:06.247241       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1101 00:46:06.247419       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:46:06.247452       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:46:06.247501       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1101 00:46:06.256976       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 00:46:06.281343       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 00:46:06.281558       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 00:46:06.281709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:46:06.281827       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:46:06.281956       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 00:46:06.282091       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 00:46:06.282239       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 00:46:06.282376       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 00:46:06.282484       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 00:46:06.282615       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 00:46:06.287799       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 00:46:07.180712       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 00:46:07.207189       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 00:46:07.271625       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:46:07.294980       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 00:46:07.295523       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1101 00:46:10.347586       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Nov 01 00:50:51 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:50:51.390396    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:51:02 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:02.694058    1646 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
	Nov 01 00:51:02 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:02.694158    1646 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/259c50c3-4143-48aa-aaf4-6d93e0e55b33-webhook-cert podName:259c50c3-4143-48aa-aaf4-6d93e0e55b33 nodeName:}" failed. No retries permitted until 2023-11-01 00:53:04.69413306 +0000 UTC m=+415.802593669 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/259c50c3-4143-48aa-aaf4-6d93e0e55b33-webhook-cert\") pod \"ingress-nginx-controller-7fcf777cb7-cqvqs\" (UID: \"259c50c3-4143-48aa-aaf4-6d93e0e55b33\") : secret \"ingress-nginx-admission\" not found"
	Nov 01 00:51:03 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:03.390409    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:51:12 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:12.474126    1646 container_manager_linux.go:512] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b, memory: /docker/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/system.slice/kubelet.service
	Nov 01 00:51:13 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:13.389773    1646 kubelet.go:1703] Unable to attach or mount volumes for pod "ingress-nginx-controller-7fcf777cb7-cqvqs_ingress-nginx(259c50c3-4143-48aa-aaf4-6d93e0e55b33)": unmounted volumes=[webhook-cert], unattached volumes=[ingress-nginx-token-nvrbf webhook-cert]: timed out waiting for the condition; skipping pod
	Nov 01 00:51:13 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:13.389834    1646 pod_workers.go:191] Error syncing pod 259c50c3-4143-48aa-aaf4-6d93e0e55b33 ("ingress-nginx-controller-7fcf777cb7-cqvqs_ingress-nginx(259c50c3-4143-48aa-aaf4-6d93e0e55b33)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[ingress-nginx-token-nvrbf webhook-cert]: timed out waiting for the condition
	Nov 01 00:51:14 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:14.390508    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:51:20 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:20.666000    1646 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:51:20 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:20.666062    1646 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:51:20 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:20.666145    1646 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:51:20 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:20.666181    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 01 00:51:31 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:31.390722    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:51:46 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:46.390697    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:51:58 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:51:58.390740    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:52:11 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:11.899053    1646 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:52:11 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:11.899117    1646 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:52:11 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:11.899179    1646 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:52:11 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:11.899211    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 01 00:52:12 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:12.390667    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:52:23 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:23.390433    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:52:24 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:24.390483    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:52:37 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:37.390617    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:52:38 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:38.391171    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:52:52 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:52:52.390969    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [1b41f897ebff0ca1417054f430679068124ab65e868869438aea1ef994a874da] <==
	* I1101 00:46:51.696853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 00:46:51.727607       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 00:46:51.727767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 00:46:51.736033       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 00:46:51.737290       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-992876_226eb49a-00a2-408b-abd8-86b18910b449!
	I1101 00:46:51.740172       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7bfbf796-a01d-48f6-a327-a5426ba3862c", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-992876_226eb49a-00a2-408b-abd8-86b18910b449 became leader
	I1101 00:46:51.837994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-992876_226eb49a-00a2-408b-abd8-86b18910b449!
	

                                                
                                                
-- /stdout --
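The repeated ErrImagePull / ImagePullBackOff entries in the kubelet log above all trace back to Docker Hub's anonymous pull rate limit ("toomanyrequests") on docker.io/jettech/kube-webhook-certgen. A minimal shell sketch of one possible mitigation follows; it is not part of the test run, the profile name and image reference are copied from the log above, and since the addon manifest pins the image by digest, the side-loaded copy would need to carry that exact digest for CRI-O to use it:

	# sketch only: pre-pull on the host, then side-load into the minikube node
	# so the kubelet never has to pull from docker.io at all
	docker pull docker.io/jettech/kube-webhook-certgen:v1.5.1
	minikube -p ingress-addon-legacy-992876 image load docker.io/jettech/kube-webhook-certgen:v1.5.1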
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-992876 -n ingress-addon-legacy-992876
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-992876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-xsccv ingress-nginx-admission-patch-6k5st ingress-nginx-controller-7fcf777cb7-cqvqs
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-992876 describe pod ingress-nginx-admission-create-xsccv ingress-nginx-admission-patch-6k5st ingress-nginx-controller-7fcf777cb7-cqvqs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-992876 describe pod ingress-nginx-admission-create-xsccv ingress-nginx-admission-patch-6k5st ingress-nginx-controller-7fcf777cb7-cqvqs: exit status 1 (93.470555ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xsccv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6k5st" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-cqvqs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-992876 describe pod ingress-nginx-admission-create-xsccv ingress-nginx-admission-patch-6k5st ingress-nginx-controller-7fcf777cb7-cqvqs: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.46s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (92.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-992876 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-992876 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (1m30.069816888s)

                                                
                                                
** stderr ** 
	error: timed out waiting for the condition on pods/ingress-nginx-controller-7fcf777cb7-cqvqs

                                                
                                                
** /stderr **
addons_test.go:207: failed waiting for ingress-nginx-controller : exit status 1
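When kubectl wait times out like this, the usual next step outside the harness's own post-mortem below is to ask the cluster why the pod never became Ready; a hedged sketch using only stock kubectl subcommands, with the context and namespace taken from the command above:

	# the Events section of describe surfaces image-pull and scheduling failures
	kubectl --context ingress-addon-legacy-992876 -n ingress-nginx describe pod -l app.kubernetes.io/component=controller
	# chronological event stream for the namespace
	kubectl --context ingress-addon-legacy-992876 -n ingress-nginx get events --sort-by=.lastTimestamp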
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-992876
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-992876:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b",
	        "Created": "2023-11-01T00:45:35.497418765Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1231900,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-01T00:45:35.812315478Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bd2c3f7c992aecdf624ceae92825f3a10bf56bd552768efdb49aafbacd808193",
	        "ResolvConfPath": "/var/lib/docker/containers/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/hostname",
	        "HostsPath": "/var/lib/docker/containers/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/hosts",
	        "LogPath": "/var/lib/docker/containers/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b/6f6097a1d1f0f792ad375e49cb6c78d21f9cf8eff3f4d077c6ef47d29131989b-json.log",
	        "Name": "/ingress-addon-legacy-992876",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-992876:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-992876",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92d79260e9d714ebc09d59a636cb15fd32b324c68cb63a80cfedebadbaa88cf2-init/diff:/var/lib/docker/overlay2/d052914c945f7ab680be56190d2f2374e48b87c8da40d55e2692538d0bc19343/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92d79260e9d714ebc09d59a636cb15fd32b324c68cb63a80cfedebadbaa88cf2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92d79260e9d714ebc09d59a636cb15fd32b324c68cb63a80cfedebadbaa88cf2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92d79260e9d714ebc09d59a636cb15fd32b324c68cb63a80cfedebadbaa88cf2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-992876",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-992876/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-992876",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-992876",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-992876",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67fe2b3f7f221255d55acbe0e4fba80c0726a6ec7c376ebbf09d203aea670da3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34307"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34306"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34303"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34305"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34304"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/67fe2b3f7f22",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-992876": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6f6097a1d1f0",
	                        "ingress-addon-legacy-992876"
	                    ],
	                    "NetworkID": "ed23880bb8b60607bc45c80d538ed0fd6221635cb164fcc8b18d96ae90058ee6",
	                    "EndpointID": "d7884ccf8b8796508791a0f422954ce5cf068a4de8b5e2f0011015110f4bd61d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
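The docker inspect JSON above can be narrowed to individual fields with Go templates, the same technique the harness itself uses further down in this log to read the 22/tcp host port; for example:

	# print only the container state and PID
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' ingress-addon-legacy-992876
	# print the host port mapped to the node's SSH port
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-992876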
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-992876 -n ingress-addon-legacy-992876
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-992876 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-992876 logs -n 25: (1.412229298s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	| image          | functional-258660 image ls                                             | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| image          | functional-258660 image load                                           | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	| image          | functional-258660 image save --daemon                                  | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-258660               |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/1202897.pem                                             |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /usr/share/ca-certificates/1202897.pem                                 |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/12028972.pem                                            |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /usr/share/ca-certificates/12028972.pem                                |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh sudo cat                                         | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:43 UTC |
	|                | /etc/test/nested/copy/1202897/hosts                                    |                             |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:43 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format short                                                |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format yaml                                                 |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| ssh            | functional-258660 ssh pgrep                                            | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC |                     |
	|                | buildkitd                                                              |                             |         |                |                     |                     |
	| image          | functional-258660 image build -t                                       | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | localhost/my-image:functional-258660                                   |                             |         |                |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |                |                     |                     |
	| image          | functional-258660 image ls                                             | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	| image          | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format json                                                 |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| image          | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | image ls --format table                                                |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| update-context | functional-258660                                                      | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:44 UTC | 01 Nov 23 00:44 UTC |
	|                | update-context                                                         |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |                |                     |                     |
	| delete         | -p functional-258660                                                   | functional-258660           | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:45 UTC |
	| start          | -p ingress-addon-legacy-992876                                         | ingress-addon-legacy-992876 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:45 UTC | 01 Nov 23 00:46 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |                |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |                |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |                |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |                |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-992876                                            | ingress-addon-legacy-992876 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:46 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |                |                     |                     |
	| addons         | ingress-addon-legacy-992876                                            | ingress-addon-legacy-992876 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:52 UTC | 01 Nov 23 00:52 UTC |
	|                | addons enable ingress-dns                                              |                             |         |                |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |                |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:45:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:45:14.318501 1231442 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:45:14.318675 1231442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:14.318685 1231442 out.go:309] Setting ErrFile to fd 2...
	I1101 00:45:14.318692 1231442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:45:14.318960 1231442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 00:45:14.319373 1231442 out.go:303] Setting JSON to false
	I1101 00:45:14.320401 1231442 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30462,"bootTime":1698769053,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 00:45:14.320469 1231442 start.go:138] virtualization:  
	I1101 00:45:14.323115 1231442 out.go:177] * [ingress-addon-legacy-992876] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 00:45:14.325767 1231442 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:45:14.327614 1231442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:45:14.325897 1231442 notify.go:220] Checking for updates...
	I1101 00:45:14.332237 1231442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:45:14.334269 1231442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 00:45:14.336378 1231442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 00:45:14.338362 1231442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:45:14.340442 1231442 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:45:14.365550 1231442 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 00:45:14.365661 1231442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:45:14.445436 1231442 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-01 00:45:14.435868228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:45:14.445545 1231442 docker.go:295] overlay module found
	I1101 00:45:14.448170 1231442 out.go:177] * Using the docker driver based on user configuration
	I1101 00:45:14.450122 1231442 start.go:298] selected driver: docker
	I1101 00:45:14.450140 1231442 start.go:902] validating driver "docker" against <nil>
	I1101 00:45:14.450153 1231442 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:45:14.450822 1231442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:45:14.512897 1231442 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-01 00:45:14.503578004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:45:14.513091 1231442 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1101 00:45:14.513312 1231442 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 00:45:14.515476 1231442 out.go:177] * Using Docker driver with root privileges
	I1101 00:45:14.517318 1231442 cni.go:84] Creating CNI manager for ""
	I1101 00:45:14.517338 1231442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:45:14.517350 1231442 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 00:45:14.517364 1231442 start_flags.go:323] config:
	{Name:ingress-addon-legacy-992876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-992876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:14.519632 1231442 out.go:177] * Starting control plane node ingress-addon-legacy-992876 in cluster ingress-addon-legacy-992876
	I1101 00:45:14.521922 1231442 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 00:45:14.524084 1231442 out.go:177] * Pulling base image ...
	I1101 00:45:14.525938 1231442 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon
	I1101 00:45:14.525902 1231442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1101 00:45:14.542905 1231442 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon, skipping pull
	I1101 00:45:14.542926 1231442 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 exists in daemon, skipping load
	I1101 00:45:14.589798 1231442 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1101 00:45:14.589822 1231442 cache.go:56] Caching tarball of preloaded images
	I1101 00:45:14.590004 1231442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1101 00:45:14.592111 1231442 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1101 00:45:14.594050 1231442 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:45:14.705131 1231442 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1101 00:45:27.657161 1231442 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:45:27.657265 1231442 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:45:28.845377 1231442 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1101 00:45:28.845780 1231442 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/config.json ...
	I1101 00:45:28.845814 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/config.json: {Name:mk856de582bbe0141dd4122b1ee948926d338d6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:28.846010 1231442 cache.go:194] Successfully downloaded all kic artifacts
	I1101 00:45:28.846035 1231442 start.go:365] acquiring machines lock for ingress-addon-legacy-992876: {Name:mk5485bd7d6159e0587ed84411769832540520ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 00:45:28.846097 1231442 start.go:369] acquired machines lock for "ingress-addon-legacy-992876" in 46.711µs
	I1101 00:45:28.846120 1231442 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-992876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-992876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:45:28.846186 1231442 start.go:125] createHost starting for "" (driver="docker")
	I1101 00:45:28.848777 1231442 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 00:45:28.849018 1231442 start.go:159] libmachine.API.Create for "ingress-addon-legacy-992876" (driver="docker")
	I1101 00:45:28.849042 1231442 client.go:168] LocalClient.Create starting
	I1101 00:45:28.849110 1231442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem
	I1101 00:45:28.849146 1231442 main.go:141] libmachine: Decoding PEM data...
	I1101 00:45:28.849162 1231442 main.go:141] libmachine: Parsing certificate...
	I1101 00:45:28.849218 1231442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem
	I1101 00:45:28.849242 1231442 main.go:141] libmachine: Decoding PEM data...
	I1101 00:45:28.849254 1231442 main.go:141] libmachine: Parsing certificate...
	I1101 00:45:28.849598 1231442 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-992876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 00:45:28.869097 1231442 cli_runner.go:211] docker network inspect ingress-addon-legacy-992876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 00:45:28.869592 1231442 network_create.go:281] running [docker network inspect ingress-addon-legacy-992876] to gather additional debugging logs...
	I1101 00:45:28.869618 1231442 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-992876
	W1101 00:45:28.886875 1231442 cli_runner.go:211] docker network inspect ingress-addon-legacy-992876 returned with exit code 1
	I1101 00:45:28.886904 1231442 network_create.go:284] error running [docker network inspect ingress-addon-legacy-992876]: docker network inspect ingress-addon-legacy-992876: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-992876 not found
	I1101 00:45:28.886921 1231442 network_create.go:286] output of [docker network inspect ingress-addon-legacy-992876]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-992876 not found
	
	** /stderr **
	I1101 00:45:28.887032 1231442 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 00:45:28.904784 1231442 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000484bd0}
	I1101 00:45:28.904820 1231442 network_create.go:124] attempt to create docker network ingress-addon-legacy-992876 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 00:45:28.904877 1231442 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 ingress-addon-legacy-992876
	I1101 00:45:28.978799 1231442 network_create.go:108] docker network ingress-addon-legacy-992876 192.168.49.0/24 created
	I1101 00:45:28.978831 1231442 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-992876" container
	I1101 00:45:28.978902 1231442 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 00:45:28.995093 1231442 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-992876 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --label created_by.minikube.sigs.k8s.io=true
	I1101 00:45:29.013853 1231442 oci.go:103] Successfully created a docker volume ingress-addon-legacy-992876
	I1101 00:45:29.013941 1231442 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-992876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --entrypoint /usr/bin/test -v ingress-addon-legacy-992876:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -d /var/lib
	I1101 00:45:30.495413 1231442 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-992876-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --entrypoint /usr/bin/test -v ingress-addon-legacy-992876:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -d /var/lib: (1.481428858s)
	I1101 00:45:30.495444 1231442 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-992876
	I1101 00:45:30.495472 1231442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1101 00:45:30.495494 1231442 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 00:45:30.495584 1231442 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-992876:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 00:45:35.415160 1231442 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-992876:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir: (4.919528547s)
	I1101 00:45:35.415192 1231442 kic.go:203] duration metric: took 4.919696 seconds to extract preloaded images to volume
	W1101 00:45:35.415322 1231442 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 00:45:35.415433 1231442 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 00:45:35.481916 1231442 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-992876 --name ingress-addon-legacy-992876 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-992876 --network ingress-addon-legacy-992876 --ip 192.168.49.2 --volume ingress-addon-legacy-992876:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458
	I1101 00:45:35.820391 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Running}}
	I1101 00:45:35.849347 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:45:35.878107 1231442 cli_runner.go:164] Run: docker exec ingress-addon-legacy-992876 stat /var/lib/dpkg/alternatives/iptables
	I1101 00:45:35.945322 1231442 oci.go:144] the created container "ingress-addon-legacy-992876" has a running status.
	I1101 00:45:35.945352 1231442 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa...
	I1101 00:45:36.565627 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 00:45:36.565716 1231442 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 00:45:36.593366 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:45:36.619419 1231442 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 00:45:36.619439 1231442 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-992876 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 00:45:36.713959 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:45:36.752081 1231442 machine.go:88] provisioning docker machine ...
	I1101 00:45:36.752115 1231442 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-992876"
	I1101 00:45:36.752185 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:36.774269 1231442 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:36.774716 1231442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1101 00:45:36.774745 1231442 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-992876 && echo "ingress-addon-legacy-992876" | sudo tee /etc/hostname
	I1101 00:45:36.945421 1231442 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-992876
	
	I1101 00:45:36.945517 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:36.972963 1231442 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:36.973428 1231442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1101 00:45:36.973454 1231442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-992876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-992876/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-992876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 00:45:37.122116 1231442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
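	Note on the mechanics above: the "About to run SSH command" blocks are minikube's ssh_runner driving the kic container through the 127.0.0.1:34307 -> 22/tcp mapping published by the earlier docker run. A minimal sketch of that kind of remote exec, assuming golang.org/x/crypto/ssh (illustrative only, not minikube's actual implementation; the key path and port are the ones from this log):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key generated by the kic.go:225 step above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	// 34307 is the host port Docker mapped to the container's 22/tcp.
	client, err := ssh.Dial("tcp", "127.0.0.1:34307", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname ingress-addon-legacy-992876 && echo "ingress-addon-legacy-992876" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s (err: %v)\n", out, err)
}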
	I1101 00:45:37.122149 1231442 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 00:45:37.122169 1231442 ubuntu.go:177] setting up certificates
	I1101 00:45:37.122178 1231442 provision.go:83] configureAuth start
	I1101 00:45:37.122242 1231442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-992876
	I1101 00:45:37.140872 1231442 provision.go:138] copyHostCerts
	I1101 00:45:37.140921 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 00:45:37.140953 1231442 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem, removing ...
	I1101 00:45:37.140963 1231442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 00:45:37.141164 1231442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 00:45:37.141267 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 00:45:37.141288 1231442 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem, removing ...
	I1101 00:45:37.141297 1231442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 00:45:37.141327 1231442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 00:45:37.141375 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 00:45:37.141394 1231442 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem, removing ...
	I1101 00:45:37.141406 1231442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 00:45:37.141434 1231442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 00:45:37.141543 1231442 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-992876 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-992876]
	I1101 00:45:37.380093 1231442 provision.go:172] copyRemoteCerts
	I1101 00:45:37.380162 1231442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 00:45:37.380212 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:37.398057 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:37.499773 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 00:45:37.499855 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 00:45:37.529417 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 00:45:37.529482 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 00:45:37.557515 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 00:45:37.557576 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1101 00:45:37.585139 1231442 provision.go:86] duration metric: configureAuth took 462.945778ms
	I1101 00:45:37.585170 1231442 ubuntu.go:193] setting minikube options for container-runtime
	I1101 00:45:37.585369 1231442 config.go:182] Loaded profile config "ingress-addon-legacy-992876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1101 00:45:37.585477 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:37.602792 1231442 main.go:141] libmachine: Using SSH client type: native
	I1101 00:45:37.603224 1231442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34307 <nil> <nil>}
	I1101 00:45:37.603249 1231442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 00:45:37.884196 1231442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 00:45:37.884259 1231442 machine.go:91] provisioned docker machine in 1.132154245s
	I1101 00:45:37.884283 1231442 client.go:171] LocalClient.Create took 9.035234436s
	I1101 00:45:37.884317 1231442 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-992876" took 9.035298174s
	I1101 00:45:37.884359 1231442 start.go:300] post-start starting for "ingress-addon-legacy-992876" (driver="docker")
	I1101 00:45:37.884385 1231442 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 00:45:37.884498 1231442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 00:45:37.884561 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:37.903555 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:38.008812 1231442 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 00:45:38.013061 1231442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 00:45:38.013101 1231442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 00:45:38.013113 1231442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 00:45:38.013120 1231442 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 00:45:38.013131 1231442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 00:45:38.013202 1231442 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 00:45:38.013294 1231442 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> 12028972.pem in /etc/ssl/certs
	I1101 00:45:38.013307 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> /etc/ssl/certs/12028972.pem
	I1101 00:45:38.013428 1231442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 00:45:38.024691 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 00:45:38.054805 1231442 start.go:303] post-start completed in 170.414318ms
	I1101 00:45:38.055192 1231442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-992876
	I1101 00:45:38.073117 1231442 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/config.json ...
	I1101 00:45:38.073417 1231442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 00:45:38.073479 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:38.090985 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:38.186916 1231442 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 00:45:38.192657 1231442 start.go:128] duration metric: createHost completed in 9.346453334s
	I1101 00:45:38.192725 1231442 start.go:83] releasing machines lock for "ingress-addon-legacy-992876", held for 9.34661559s
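	The duration metric lines here ("createHost completed in 9.346453334s", lock "held for 9.34661559s") are plain wall-clock deltas captured around each phase. The pattern, sketched with the standard library only (illustrative):

package main

import (
	"log"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(50 * time.Millisecond) // stands in for the createHost work
	log.Printf("duration metric: createHost completed in %s", time.Since(start))
}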
	I1101 00:45:38.192804 1231442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-992876
	I1101 00:45:38.210575 1231442 ssh_runner.go:195] Run: cat /version.json
	I1101 00:45:38.210632 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:38.210881 1231442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 00:45:38.210943 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:45:38.235030 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:38.242515 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:45:38.333318 1231442 ssh_runner.go:195] Run: systemctl --version
	I1101 00:45:38.475423 1231442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 00:45:38.625129 1231442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 00:45:38.630940 1231442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:45:38.655546 1231442 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 00:45:38.655624 1231442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 00:45:38.691824 1231442 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
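	The find/mv passes above disable competing CNI configs non-destructively: loopback, bridge and podman files are renamed with a .mk_disabled suffix so only the kindnet config chosen later is honored. A local sketch of the same rename pass (filepath.Glob standing in for the remote find; paths as in the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pat)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}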
	I1101 00:45:38.691849 1231442 start.go:472] detecting cgroup driver to use...
	I1101 00:45:38.691883 1231442 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1101 00:45:38.691936 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 00:45:38.711143 1231442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 00:45:38.724574 1231442 docker.go:204] disabling cri-docker service (if available) ...
	I1101 00:45:38.724677 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 00:45:38.742172 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 00:45:38.759043 1231442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 00:45:38.859691 1231442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 00:45:38.958216 1231442 docker.go:220] disabling docker service ...
	I1101 00:45:38.958286 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 00:45:38.979675 1231442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 00:45:38.993147 1231442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 00:45:39.103544 1231442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 00:45:39.203045 1231442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 00:45:39.216198 1231442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 00:45:39.235135 1231442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1101 00:45:39.235265 1231442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:39.247504 1231442 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 00:45:39.247619 1231442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:39.259363 1231442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:39.270923 1231442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 00:45:39.283515 1231442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 00:45:39.294654 1231442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 00:45:39.304825 1231442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 00:45:39.315581 1231442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 00:45:39.423102 1231442 ssh_runner.go:195] Run: sudo systemctl restart crio
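	Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, sketched as a drop-in just before the daemon-reload and restart (the section headers are assumed from stock cri-o config layout; only the three key/value pairs are attested by this log):

[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"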
	I1101 00:45:39.552255 1231442 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 00:45:39.552325 1231442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 00:45:39.557175 1231442 start.go:540] Will wait 60s for crictl version
	I1101 00:45:39.557251 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:39.561756 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 00:45:39.612632 1231442 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 00:45:39.612719 1231442 ssh_runner.go:195] Run: crio --version
	I1101 00:45:39.655103 1231442 ssh_runner.go:195] Run: crio --version
	I1101 00:45:39.701819 1231442 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1101 00:45:39.703705 1231442 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-992876 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 00:45:39.721070 1231442 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 00:45:39.725707 1231442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:45:39.738856 1231442 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1101 00:45:39.738927 1231442 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:45:39.791263 1231442 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1101 00:45:39.791355 1231442 ssh_runner.go:195] Run: which lz4
	I1101 00:45:39.795959 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1101 00:45:39.796062 1231442 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 00:45:39.800196 1231442 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 00:45:39.800231 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1101 00:45:41.996164 1231442 crio.go:444] Took 2.200137 seconds to copy over tarball
	I1101 00:45:41.996241 1231442 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 00:45:44.722159 1231442 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.725873723s)
	I1101 00:45:44.722190 1231442 crio.go:451] Took 2.726005 seconds to extract the tarball
	I1101 00:45:44.722201 1231442 ssh_runner.go:146] rm: /preloaded.tar.lz4
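	The stat -c "%s %y" probe above is a cheap change detector: the ~490 MB preload is copied only when the remote file is missing or differs in size/mtime (here it was missing, so the full scp ran). The check, sketched against the local filesystem (the real code compares the remote stat output over SSH; paths illustrative):

package main

import (
	"fmt"
	"os"
)

// needsCopy reports whether dst is missing or differs from src by size or
// modification time -- the same signal as the remote `stat -c "%s %y"` probe.
func needsCopy(src, dst string) (bool, error) {
	s, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	d, err := os.Stat(dst)
	if os.IsNotExist(err) {
		return true, nil // the "No such file or directory" branch above
	}
	if err != nil {
		return false, err
	}
	return s.Size() != d.Size() || !s.ModTime().Equal(d.ModTime()), nil
}

func main() {
	ok, err := needsCopy("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4", "/preloaded.tar.lz4")
	fmt.Println(ok, err)
}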
	I1101 00:45:44.938481 1231442 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 00:45:44.978699 1231442 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1101 00:45:44.978722 1231442 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1101 00:45:44.978789 1231442 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:44.978791 1231442 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1101 00:45:44.978978 1231442 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1101 00:45:44.978983 1231442 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1101 00:45:44.979059 1231442 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1101 00:45:44.979070 1231442 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:44.979126 1231442 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:44.979136 1231442 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1101 00:45:44.980283 1231442 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:44.980745 1231442 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1101 00:45:44.981069 1231442 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1101 00:45:44.981112 1231442 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:44.981158 1231442 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1101 00:45:44.981238 1231442 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:44.981303 1231442 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1101 00:45:44.981066 1231442 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W1101 00:45:45.324221 1231442 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.324596 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1101 00:45:45.354188 1231442 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.354382 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1101 00:45:45.364105 1231442 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.364272 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:45.364610 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1101 00:45:45.375713 1231442 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.375940 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:45.388447 1231442 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1101 00:45:45.388517 1231442 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1101 00:45:45.388569 1231442 ssh_runner.go:195] Run: which crictl
	W1101 00:45:45.400499 1231442 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.400674 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1101 00:45:45.404590 1231442 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.404774 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1101 00:45:45.469470 1231442 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1101 00:45:45.469525 1231442 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1101 00:45:45.469578 1231442 ssh_runner.go:195] Run: which crictl
	W1101 00:45:45.526748 1231442 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1101 00:45:45.526931 1231442 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:45.557441 1231442 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1101 00:45:45.557610 1231442 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1101 00:45:45.557533 1231442 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1101 00:45:45.557662 1231442 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:45.557710 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.557804 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.596567 1231442 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1101 00:45:45.596603 1231442 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:45.596650 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.596730 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1101 00:45:45.596799 1231442 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1101 00:45:45.596815 1231442 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1101 00:45:45.596837 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.596899 1231442 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1101 00:45:45.596912 1231442 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1101 00:45:45.596930 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.597004 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1101 00:45:45.726183 1231442 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1101 00:45:45.726271 1231442 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:45.726343 1231442 ssh_runner.go:195] Run: which crictl
	I1101 00:45:45.726438 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1101 00:45:45.726511 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1101 00:45:45.726623 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1101 00:45:45.726663 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1101 00:45:45.726734 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1101 00:45:45.726792 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1101 00:45:45.726850 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1101 00:45:45.752473 1231442 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:45:45.891081 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1101 00:45:45.891215 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1101 00:45:45.891241 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1101 00:45:45.891300 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1101 00:45:45.891370 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1101 00:45:45.908275 1231442 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1101 00:45:45.908379 1231442 cache_images.go:92] LoadImages completed in 929.642429ms
	W1101 00:45:45.908467 1231442 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	I1101 00:45:45.908565 1231442 ssh_runner.go:195] Run: crio config
	I1101 00:45:45.966955 1231442 cni.go:84] Creating CNI manager for ""
	I1101 00:45:45.967023 1231442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:45:45.967073 1231442 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 00:45:45.967116 1231442 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-992876 NodeName:ingress-addon-legacy-992876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1101 00:45:45.967321 1231442 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-992876"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 00:45:45.967444 1231442 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-992876 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-992876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 00:45:45.967550 1231442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1101 00:45:45.978037 1231442 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 00:45:45.978136 1231442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 00:45:45.988790 1231442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1101 00:45:46.011544 1231442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1101 00:45:46.033775 1231442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
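	Two details worth noting in the block above: the empty ExecStart= line in the kubelet drop-in is systemd's idiom for clearing the ExecStart inherited from the base kubelet.service before setting a new one, and the "scp memory" sources mean the files are rendered in-process and streamed to the node rather than staged on disk. A minimal sketch of such an in-memory render, with illustrative field and template names only (not minikube's actual template):

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// kubeletOpts is a simplified, assumed stand-in for the values templated
// into the kubelet drop-in; minikube's real struct differs.
type kubeletOpts struct {
	Hostname string
	NodeIP   string
	Version  string
}

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	var buf bytes.Buffer
	if err := t.Execute(&buf, kubeletOpts{
		Hostname: "ingress-addon-legacy-992876",
		NodeIP:   "192.168.49.2",
		Version:  "v1.18.20",
	}); err != nil {
		panic(err)
	}
	// buf.Bytes() is what a "scp memory --> ..." line would transfer.
	fmt.Print(buf.String())
}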
	I1101 00:45:46.054982 1231442 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 00:45:46.059661 1231442 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 00:45:46.073091 1231442 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876 for IP: 192.168.49.2
	I1101 00:45:46.073121 1231442 certs.go:190] acquiring lock for shared ca certs: {Name:mk19a54d78f5cf4996fdfc5da5ee5226ef1f844f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.073252 1231442 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key
	I1101 00:45:46.073296 1231442 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key
	I1101 00:45:46.073347 1231442 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key
	I1101 00:45:46.073362 1231442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt with IP's: []
	I1101 00:45:46.306794 1231442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt ...
	I1101 00:45:46.306826 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: {Name:mk875a1d5c7486c9a5ed1078452ffb0a1ffb5ae7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.307030 1231442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key ...
	I1101 00:45:46.307050 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key: {Name:mk3fd496714d5fd899c9e37395177b9cc2d941e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.307148 1231442 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key.dd3b5fb2
	I1101 00:45:46.307170 1231442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 00:45:46.588347 1231442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt.dd3b5fb2 ...
	I1101 00:45:46.588377 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt.dd3b5fb2: {Name:mkffe2c3cee48d112aec67d7d22d7663057bc731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.588582 1231442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key.dd3b5fb2 ...
	I1101 00:45:46.588598 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key.dd3b5fb2: {Name:mk5925014c6fbae288bd7a39d7b4bd81834fdf97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:46.588678 1231442 certs.go:337] copying /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt
	I1101 00:45:46.588756 1231442 certs.go:341] copying /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key
	I1101 00:45:46.588813 1231442 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key
	I1101 00:45:46.588832 1231442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt with IP's: []
	I1101 00:45:47.215772 1231442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt ...
	I1101 00:45:47.215806 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt: {Name:mk10cdd726b2e34709ae05f8fec8af4919dd360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:45:47.216005 1231442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key ...
	I1101 00:45:47.216019 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key: {Name:mk419d291d90ff351ec65e5f8058266b4b67400b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
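	certs.go is assembling a small local PKI here: the cached minikubeCA pair is reused, and leaf certificates are issued whose IP SANs match the san=[...] lists logged above. A compact standard-library sketch of issuing such a leaf (toy lifetime and subjects; error handling elided for brevity, which real code must not do):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Toy CA; in the real flow this key pair is loaded from .minikube/ca.key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf with the IP SANs from the apiserver cert's san=[...] list above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}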
	I1101 00:45:47.216103 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 00:45:47.216127 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 00:45:47.216145 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 00:45:47.216161 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 00:45:47.216172 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 00:45:47.216191 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 00:45:47.216207 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 00:45:47.216241 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 00:45:47.216317 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem (1338 bytes)
	W1101 00:45:47.216355 1231442 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897_empty.pem, impossibly tiny 0 bytes
	I1101 00:45:47.216369 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 00:45:47.216399 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem (1082 bytes)
	I1101 00:45:47.216426 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem (1123 bytes)
	I1101 00:45:47.216460 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem (1675 bytes)
	I1101 00:45:47.216509 1231442 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 00:45:47.216549 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem -> /usr/share/ca-certificates/1202897.pem
	I1101 00:45:47.216567 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> /usr/share/ca-certificates/12028972.pem
	I1101 00:45:47.216583 1231442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:45:47.217190 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 00:45:47.244847 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 00:45:47.273068 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 00:45:47.301478 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 00:45:47.329959 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 00:45:47.358317 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 00:45:47.386085 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 00:45:47.414480 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 00:45:47.442608 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem --> /usr/share/ca-certificates/1202897.pem (1338 bytes)
	I1101 00:45:47.470908 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /usr/share/ca-certificates/12028972.pem (1708 bytes)
	I1101 00:45:47.499270 1231442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 00:45:47.528038 1231442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 00:45:47.548863 1231442 ssh_runner.go:195] Run: openssl version
	I1101 00:45:47.555834 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1202897.pem && ln -fs /usr/share/ca-certificates/1202897.pem /etc/ssl/certs/1202897.pem"
	I1101 00:45:47.567488 1231442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1202897.pem
	I1101 00:45:47.571950 1231442 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  1 00:39 /usr/share/ca-certificates/1202897.pem
	I1101 00:45:47.572014 1231442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1202897.pem
	I1101 00:45:47.580487 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1202897.pem /etc/ssl/certs/51391683.0"
	I1101 00:45:47.592157 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12028972.pem && ln -fs /usr/share/ca-certificates/12028972.pem /etc/ssl/certs/12028972.pem"
	I1101 00:45:47.603824 1231442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12028972.pem
	I1101 00:45:47.608493 1231442 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  1 00:39 /usr/share/ca-certificates/12028972.pem
	I1101 00:45:47.608561 1231442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12028972.pem
	I1101 00:45:47.617510 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12028972.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 00:45:47.629032 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 00:45:47.640301 1231442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:45:47.645223 1231442 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  1 00:33 /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:45:47.645314 1231442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 00:45:47.653828 1231442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
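	The openssl x509 -hash / ln -fs pairs above build OpenSSL's hashed lookup directory: each CA in /usr/share/ca-certificates gets a <subject-hash>.0 symlink under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem) so TLS clients can find it by subject. The same step, sketched in Go by shelling out to the identical openssl invocation (hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the openssl/ln pair in the log: compute the
// certificate's subject hash, then symlink it as <hash>.0 in certsDir.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}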
	I1101 00:45:47.665081 1231442 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 00:45:47.669328 1231442 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 00:45:47.669424 1231442 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-992876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-992876 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:45:47.669510 1231442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 00:45:47.669568 1231442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 00:45:47.710833 1231442 cri.go:89] found id: ""
	I1101 00:45:47.710903 1231442 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 00:45:47.721384 1231442 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 00:45:47.731820 1231442 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1101 00:45:47.731941 1231442 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 00:45:47.742548 1231442 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 00:45:47.742589 1231442 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 00:45:47.798714 1231442 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1101 00:45:47.799108 1231442 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 00:45:47.848764 1231442 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1101 00:45:47.848865 1231442 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1101 00:45:47.848925 1231442 kubeadm.go:322] OS: Linux
	I1101 00:45:47.849013 1231442 kubeadm.go:322] CGROUPS_CPU: enabled
	I1101 00:45:47.849092 1231442 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1101 00:45:47.849167 1231442 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1101 00:45:47.849232 1231442 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1101 00:45:47.849310 1231442 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1101 00:45:47.849426 1231442 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1101 00:45:47.942982 1231442 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 00:45:47.943146 1231442 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 00:45:47.943277 1231442 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 00:45:48.192726 1231442 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 00:45:48.194217 1231442 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 00:45:48.194495 1231442 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 00:45:48.301465 1231442 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 00:45:48.304764 1231442 out.go:204]   - Generating certificates and keys ...
	I1101 00:45:48.304888 1231442 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 00:45:48.305010 1231442 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 00:45:48.804619 1231442 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 00:45:49.365237 1231442 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1101 00:45:49.846317 1231442 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1101 00:45:50.545660 1231442 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1101 00:45:51.220097 1231442 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1101 00:45:51.220492 1231442 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-992876 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 00:45:51.755362 1231442 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1101 00:45:51.755774 1231442 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-992876 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 00:45:52.572807 1231442 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 00:45:52.854602 1231442 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 00:45:53.285504 1231442 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1101 00:45:53.285830 1231442 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 00:45:53.788719 1231442 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 00:45:54.670535 1231442 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 00:45:55.136813 1231442 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 00:45:55.607317 1231442 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 00:45:55.608353 1231442 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 00:45:55.610886 1231442 out.go:204]   - Booting up control plane ...
	I1101 00:45:55.611004 1231442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 00:45:55.623188 1231442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 00:45:55.623277 1231442 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 00:45:55.623371 1231442 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 00:45:55.623559 1231442 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 00:46:07.624760 1231442 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.002085 seconds
	I1101 00:46:07.624876 1231442 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 00:46:07.642733 1231442 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 00:46:08.160678 1231442 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 00:46:08.160826 1231442 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-992876 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1101 00:46:08.670863 1231442 kubeadm.go:322] [bootstrap-token] Using token: js3x75.dl52zft1ly2rea4m
	I1101 00:46:08.672909 1231442 out.go:204]   - Configuring RBAC rules ...
	I1101 00:46:08.673052 1231442 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 00:46:08.677272 1231442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 00:46:08.684464 1231442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 00:46:08.686980 1231442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 00:46:08.689528 1231442 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 00:46:08.692966 1231442 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 00:46:08.700857 1231442 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 00:46:08.977880 1231442 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 00:46:09.090547 1231442 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 00:46:09.094730 1231442 kubeadm.go:322] 
	I1101 00:46:09.094802 1231442 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 00:46:09.094808 1231442 kubeadm.go:322] 
	I1101 00:46:09.094880 1231442 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 00:46:09.094895 1231442 kubeadm.go:322] 
	I1101 00:46:09.094919 1231442 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 00:46:09.094974 1231442 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 00:46:09.095021 1231442 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 00:46:09.095026 1231442 kubeadm.go:322] 
	I1101 00:46:09.095075 1231442 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 00:46:09.095145 1231442 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 00:46:09.095208 1231442 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 00:46:09.095213 1231442 kubeadm.go:322] 
	I1101 00:46:09.095291 1231442 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 00:46:09.095363 1231442 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 00:46:09.095379 1231442 kubeadm.go:322] 
	I1101 00:46:09.095457 1231442 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token js3x75.dl52zft1ly2rea4m \
	I1101 00:46:09.095556 1231442 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 \
	I1101 00:46:09.095578 1231442 kubeadm.go:322]     --control-plane 
	I1101 00:46:09.095583 1231442 kubeadm.go:322] 
	I1101 00:46:09.095661 1231442 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 00:46:09.095666 1231442 kubeadm.go:322] 
	I1101 00:46:09.095742 1231442 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token js3x75.dl52zft1ly2rea4m \
	I1101 00:46:09.095852 1231442 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 
	I1101 00:46:09.099242 1231442 kubeadm.go:322] W1101 00:45:47.797819    1233 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1101 00:46:09.099466 1231442 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1101 00:46:09.099573 1231442 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 00:46:09.099698 1231442 kubeadm.go:322] W1101 00:45:55.618121    1233 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1101 00:46:09.099823 1231442 kubeadm.go:322] W1101 00:45:55.619306    1233 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1101 00:46:09.099841 1231442 cni.go:84] Creating CNI manager for ""
	I1101 00:46:09.099849 1231442 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:46:09.102282 1231442 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 00:46:09.104070 1231442 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 00:46:09.109092 1231442 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1101 00:46:09.109116 1231442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 00:46:09.133630 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 00:46:09.548796 1231442 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 00:46:09.548940 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:09.549034 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=ingress-addon-legacy-992876 minikube.k8s.io/updated_at=2023_11_01T00_46_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:09.687609 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:09.687623 1231442 ops.go:34] apiserver oom_adj: -16
	I1101 00:46:09.807030 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:10.402192 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:10.901664 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:11.401949 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:11.901717 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:12.402214 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:12.902529 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:13.401649 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:13.902591 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:14.402220 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:14.902204 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:15.401746 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:15.901685 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:16.402319 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:16.901959 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:17.402241 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:17.901650 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:18.402204 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:18.902429 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:19.401790 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:19.902168 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:20.402323 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:20.902615 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:21.402126 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:21.902662 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:22.402599 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:22.902362 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:23.401685 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:23.902165 1231442 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 00:46:24.054520 1231442 kubeadm.go:1081] duration metric: took 14.505631799s to wait for elevateKubeSystemPrivileges.
	I1101 00:46:24.054550 1231442 kubeadm.go:406] StartCluster complete in 36.385130744s
	I1101 00:46:24.054576 1231442 settings.go:142] acquiring lock: {Name:mke36bce3f316e572c27d9ade5690ad307116f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:46:24.054637 1231442 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:46:24.055354 1231442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/kubeconfig: {Name:mk54047efde1577abb33547e94416477b8fd3071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 00:46:24.056085 1231442 kapi.go:59] client config for ingress-addon-legacy-992876: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:46:24.057312 1231442 config.go:182] Loaded profile config "ingress-addon-legacy-992876": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1101 00:46:24.057394 1231442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 00:46:24.057537 1231442 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 00:46:24.057633 1231442 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-992876"
	I1101 00:46:24.057652 1231442 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-992876"
	I1101 00:46:24.057709 1231442 host.go:66] Checking if "ingress-addon-legacy-992876" exists ...
	I1101 00:46:24.058197 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:46:24.058855 1231442 cert_rotation.go:137] Starting client certificate rotation controller
	I1101 00:46:24.059340 1231442 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-992876"
	I1101 00:46:24.059358 1231442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-992876"
	I1101 00:46:24.059656 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:46:24.111731 1231442 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 00:46:24.114247 1231442 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:46:24.114266 1231442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 00:46:24.114328 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:46:24.112485 1231442 kapi.go:59] client config for ingress-addon-legacy-992876: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:46:24.114759 1231442 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-992876"
	I1101 00:46:24.114788 1231442 host.go:66] Checking if "ingress-addon-legacy-992876" exists ...
	I1101 00:46:24.115243 1231442 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-992876 --format={{.State.Status}}
	I1101 00:46:24.148132 1231442 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-992876" context rescaled to 1 replicas
	I1101 00:46:24.148172 1231442 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 00:46:24.151741 1231442 out.go:177] * Verifying Kubernetes components...
	I1101 00:46:24.153504 1231442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:46:24.171908 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:46:24.181025 1231442 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 00:46:24.181045 1231442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 00:46:24.181106 1231442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-992876
	I1101 00:46:24.223709 1231442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34307 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/ingress-addon-legacy-992876/id_rsa Username:docker}
	I1101 00:46:24.295527 1231442 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 00:46:24.296319 1231442 kapi.go:59] client config for ingress-addon-legacy-992876: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 00:46:24.296858 1231442 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-992876" to be "Ready" ...
	I1101 00:46:24.365071 1231442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 00:46:24.426344 1231442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 00:46:24.776806 1231442 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 00:46:24.877572 1231442 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1101 00:46:24.879094 1231442 addons.go:502] enable addons completed in 821.549792ms: enabled=[storage-provisioner default-storageclass]
	I1101 00:46:26.379543 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:28.875582 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:30.875959 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:33.376065 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:35.376428 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:37.876364 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:40.376351 1231442 node_ready.go:58] node "ingress-addon-legacy-992876" has status "Ready":"False"
	I1101 00:46:42.876688 1231442 node_ready.go:49] node "ingress-addon-legacy-992876" has status "Ready":"True"
	I1101 00:46:42.876717 1231442 node_ready.go:38] duration metric: took 18.579810147s waiting for node "ingress-addon-legacy-992876" to be "Ready" ...
	I1101 00:46:42.876729 1231442 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:46:42.885565 1231442 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-447wp" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:44.896047 1231442 pod_ready.go:102] pod "coredns-66bff467f8-447wp" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-01 00:46:24 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1101 00:46:47.398404 1231442 pod_ready.go:102] pod "coredns-66bff467f8-447wp" in "kube-system" namespace has status "Ready":"False"
	I1101 00:46:49.900110 1231442 pod_ready.go:102] pod "coredns-66bff467f8-447wp" in "kube-system" namespace has status "Ready":"False"
	I1101 00:46:50.397766 1231442 pod_ready.go:92] pod "coredns-66bff467f8-447wp" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.397792 1231442 pod_ready.go:81] duration metric: took 7.512147824s waiting for pod "coredns-66bff467f8-447wp" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.397807 1231442 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.402363 1231442 pod_ready.go:92] pod "etcd-ingress-addon-legacy-992876" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.402387 1231442 pod_ready.go:81] duration metric: took 4.573071ms waiting for pod "etcd-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.402402 1231442 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.406768 1231442 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-992876" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.406793 1231442 pod_ready.go:81] duration metric: took 4.383386ms waiting for pod "kube-apiserver-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.406805 1231442 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.411428 1231442 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-992876" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.411453 1231442 pod_ready.go:81] duration metric: took 4.639859ms waiting for pod "kube-controller-manager-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.411464 1231442 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qxwkc" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.416128 1231442 pod_ready.go:92] pod "kube-proxy-qxwkc" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.416155 1231442 pod_ready.go:81] duration metric: took 4.683946ms waiting for pod "kube-proxy-qxwkc" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.416166 1231442 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.593537 1231442 request.go:629] Waited for 177.31245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-992876
	I1101 00:46:50.793539 1231442 request.go:629] Waited for 197.35082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-992876
	I1101 00:46:50.796140 1231442 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-992876" in "kube-system" namespace has status "Ready":"True"
	I1101 00:46:50.796166 1231442 pod_ready.go:81] duration metric: took 379.992773ms waiting for pod "kube-scheduler-ingress-addon-legacy-992876" in "kube-system" namespace to be "Ready" ...
	I1101 00:46:50.796180 1231442 pod_ready.go:38] duration metric: took 7.919438827s for extra waiting for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 00:46:50.796205 1231442 api_server.go:52] waiting for apiserver process to appear ...
	I1101 00:46:50.796271 1231442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 00:46:50.809038 1231442 api_server.go:72] duration metric: took 26.66083189s to wait for apiserver process to appear ...
	I1101 00:46:50.809063 1231442 api_server.go:88] waiting for apiserver healthz status ...
	I1101 00:46:50.809079 1231442 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 00:46:50.817782 1231442 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 00:46:50.818762 1231442 api_server.go:141] control plane version: v1.18.20
	I1101 00:46:50.818788 1231442 api_server.go:131] duration metric: took 9.717206ms to wait for apiserver health ...
	I1101 00:46:50.818797 1231442 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 00:46:50.993198 1231442 request.go:629] Waited for 174.307351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1101 00:46:50.999133 1231442 system_pods.go:59] 8 kube-system pods found
	I1101 00:46:50.999178 1231442 system_pods.go:61] "coredns-66bff467f8-447wp" [bd34668c-987e-41fe-8236-9e2c434eee33] Running
	I1101 00:46:50.999185 1231442 system_pods.go:61] "etcd-ingress-addon-legacy-992876" [beec4855-e8f5-4625-a517-ca298207b5b9] Running
	I1101 00:46:50.999192 1231442 system_pods.go:61] "kindnet-d4npj" [14459195-556d-40fb-a096-0a434c3c0177] Running
	I1101 00:46:50.999197 1231442 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-992876" [7d8f1186-f057-4e39-9cc0-0b276174d187] Running
	I1101 00:46:50.999203 1231442 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-992876" [f3b92fad-1e62-4f94-9a38-95e6157de794] Running
	I1101 00:46:50.999208 1231442 system_pods.go:61] "kube-proxy-qxwkc" [f519b66a-24e3-4796-bbab-a043a2e7104f] Running
	I1101 00:46:50.999213 1231442 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-992876" [b9a94eb0-3306-4928-8228-2ed84b0f7dd1] Running
	I1101 00:46:50.999221 1231442 system_pods.go:61] "storage-provisioner" [b090f608-18cc-4c75-b85f-08c99204530c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:46:50.999229 1231442 system_pods.go:74] duration metric: took 180.424263ms to wait for pod list to return data ...
	I1101 00:46:50.999238 1231442 default_sa.go:34] waiting for default service account to be created ...
	I1101 00:46:51.193601 1231442 request.go:629] Waited for 194.272475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1101 00:46:51.196243 1231442 default_sa.go:45] found service account: "default"
	I1101 00:46:51.196268 1231442 default_sa.go:55] duration metric: took 197.0237ms for default service account to be created ...
	I1101 00:46:51.196288 1231442 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 00:46:51.393619 1231442 request.go:629] Waited for 197.256855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1101 00:46:51.400830 1231442 system_pods.go:86] 8 kube-system pods found
	I1101 00:46:51.400861 1231442 system_pods.go:89] "coredns-66bff467f8-447wp" [bd34668c-987e-41fe-8236-9e2c434eee33] Running
	I1101 00:46:51.400869 1231442 system_pods.go:89] "etcd-ingress-addon-legacy-992876" [beec4855-e8f5-4625-a517-ca298207b5b9] Running
	I1101 00:46:51.400875 1231442 system_pods.go:89] "kindnet-d4npj" [14459195-556d-40fb-a096-0a434c3c0177] Running
	I1101 00:46:51.400889 1231442 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-992876" [7d8f1186-f057-4e39-9cc0-0b276174d187] Running
	I1101 00:46:51.400899 1231442 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-992876" [f3b92fad-1e62-4f94-9a38-95e6157de794] Running
	I1101 00:46:51.400904 1231442 system_pods.go:89] "kube-proxy-qxwkc" [f519b66a-24e3-4796-bbab-a043a2e7104f] Running
	I1101 00:46:51.400917 1231442 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-992876" [b9a94eb0-3306-4928-8228-2ed84b0f7dd1] Running
	I1101 00:46:51.400930 1231442 system_pods.go:89] "storage-provisioner" [b090f608-18cc-4c75-b85f-08c99204530c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 00:46:51.400943 1231442 system_pods.go:126] duration metric: took 204.648128ms to wait for k8s-apps to be running ...
	I1101 00:46:51.400951 1231442 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 00:46:51.401028 1231442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 00:46:51.426949 1231442 system_svc.go:56] duration metric: took 25.985468ms WaitForService to wait for kubelet.
	I1101 00:46:51.426996 1231442 kubeadm.go:581] duration metric: took 27.278782752s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 00:46:51.427020 1231442 node_conditions.go:102] verifying NodePressure condition ...
	I1101 00:46:51.593384 1231442 request.go:629] Waited for 166.276969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1101 00:46:51.596917 1231442 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 00:46:51.596949 1231442 node_conditions.go:123] node cpu capacity is 2
	I1101 00:46:51.596962 1231442 node_conditions.go:105] duration metric: took 169.936315ms to run NodePressure ...
	I1101 00:46:51.596974 1231442 start.go:228] waiting for startup goroutines ...
	I1101 00:46:51.597004 1231442 start.go:233] waiting for cluster config update ...
	I1101 00:46:51.597015 1231442 start.go:242] writing updated cluster config ...
	I1101 00:46:51.597320 1231442 ssh_runner.go:195] Run: rm -f paused
	I1101 00:46:51.683524 1231442 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1101 00:46:51.686138 1231442 out.go:177] 
	W1101 00:46:51.688521 1231442 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1101 00:46:51.690392 1231442 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1101 00:46:51.692271 1231442 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-992876" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 01 00:53:10 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:10.390020425Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=e0589bcf-cec4-4f22-b96c-414b7a7d3575 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:18 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:18.390036393Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=97b79bc7-0ec9-4e21-bb7c-33f5a9305f64 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:18 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:18.390332179Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=97b79bc7-0ec9-4e21-bb7c-33f5a9305f64 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:21 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:21.390132778Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=5468ca68-4429-49d8-8bf3-9211e465d314 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:30 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:30.389990632Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=8d1b9f58-4347-4795-ba68-321b5297bee5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:30 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:30.390307250Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=8d1b9f58-4347-4795-ba68-321b5297bee5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:32 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:32.390082897Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=66f1d241-95ad-4b1f-90f7-5b8e0bb4d0ec name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:36 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:36.389972849Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=c07fdd99-c07c-42c0-b408-330ca843921b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:36 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:36.390244765Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=c07fdd99-c07c-42c0-b408-330ca843921b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:41 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:41.389949900Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=ead8d4e0-d3eb-4d3f-bcbc-74b0af241104 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:41 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:41.390254145Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=ead8d4e0-d3eb-4d3f-bcbc-74b0af241104 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:41 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:41.390925371Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=7da239a4-cce5-4e42-a8e3-e35650e0f8bd name=/runtime.v1alpha2.ImageService/PullImage
	Nov 01 00:53:41 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:41.393203982Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:53:47 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:47.389877573Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=a07c322f-7e30-4c02-9663-0fc36544565b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:51 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:51.389892349Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=5883c8f1-d831-465d-a57f-d4550a412678 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:51 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:51.390164602Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=5883c8f1-d831-465d-a57f-d4550a412678 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:53:58 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:53:58.390085199Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=c36b6cad-2fbc-4789-aab3-896e642cbeee name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:54:06 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:54:06.389998830Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=b20b128b-d30b-4fa2-8ff2-212d8dfd9183 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:54:06 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:54:06.390272371Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=b20b128b-d30b-4fa2-8ff2-212d8dfd9183 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:54:10 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:54:10.389867486Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=521be45a-45ad-4076-b204-01ee0419fc77 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:54:19 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:54:19.389988051Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=7aecf8d5-d075-4b46-b7bb-aac26dbdeefb name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:54:19 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:54:19.390271217Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=7aecf8d5-d075-4b46-b7bb-aac26dbdeefb name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:54:24 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:54:24.389935358Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=4d75efca-9b27-48c3-bfc0-36dde49760ef name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:54:26 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:54:26.390153775Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=3291e375-5efb-46aa-8b9d-95c4d268c6f4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 01 00:54:26 ingress-addon-legacy-992876 crio[901]: time="2023-11-01 00:54:26.390428580Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=3291e375-5efb-46aa-8b9d-95c4d268c6f4 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1b41f897ebff0       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   7 minutes ago       Running             storage-provisioner       0                   a6c9d905b3c1d       storage-provisioner
	0df625f0dfda5       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  7 minutes ago       Running             coredns                   0                   7dda86566ca3c       coredns-66bff467f8-447wp
	2e16f8346f39e       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                8 minutes ago       Running             kindnet-cni               0                   d70f9e4820a4e       kindnet-d4npj
	1e7d915d10b43       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  8 minutes ago       Running             kube-proxy                0                   0ab78bb81fc11       kube-proxy-qxwkc
	39f31514e884c       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  8 minutes ago       Running             kube-controller-manager   0                   541d6044ef64c       kube-controller-manager-ingress-addon-legacy-992876
	8ad01671a57d6       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  8 minutes ago       Running             kube-scheduler            0                   e1b9ca063c0a5       kube-scheduler-ingress-addon-legacy-992876
	8e4ec398cc7c4       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  8 minutes ago       Running             etcd                      0                   987bd08ca5697       etcd-ingress-addon-legacy-992876
	a0d57dc63c1b3       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  8 minutes ago       Running             kube-apiserver            0                   15fb43877ef74       kube-apiserver-ingress-addon-legacy-992876
	
	* 
	* ==> coredns [0df625f0dfda532c66e4a68dee83b44e5e21939940390f87095c61dc7d190972] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:59674 - 51483 "HINFO IN 6518257975335028987.8642677448013009581. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01330515s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-992876
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-992876
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=ingress-addon-legacy-992876
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T00_46_09_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 00:46:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-992876
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 00:54:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 00:52:12 +0000   Wed, 01 Nov 2023 00:46:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 00:52:12 +0000   Wed, 01 Nov 2023 00:46:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 00:52:12 +0000   Wed, 01 Nov 2023 00:46:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 00:52:12 +0000   Wed, 01 Nov 2023 00:46:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-992876
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 deb9daf9c2264630b846097fd1294d82
	  System UUID:                87f616e1-2ec9-4616-b8fd-46b18f0be87b
	  Boot ID:                    11045d5e-2454-4ceb-8984-3078b90f4cad
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-xsccv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-admission-patch-6k5st                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-cqvqs              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m35s
	  kube-system                 coredns-66bff467f8-447wp                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m4s
	  kube-system                 etcd-ingress-addon-legacy-992876                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kindnet-d4npj                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m3s
	  kube-system                 kube-apiserver-ingress-addon-legacy-992876             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-992876    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-ingress-dns-minikube                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-qxwkc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-ingress-addon-legacy-992876             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m29s (x4 over 8m29s)  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m29s (x4 over 8m29s)  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s (x4 over 8m29s)  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m15s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s                  kubelet     Node ingress-addon-legacy-992876 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m2s                   kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m45s                  kubelet     Node ingress-addon-legacy-992876 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] '70643b0000000000'
	[  +0.000767] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001031] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000004e2890c8
	[  +0.001063] FS-Cache: N-key=[8] '70643b0000000000'
	[  +0.004430] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000527cc4c3
	[  +0.001080] FS-Cache: O-key=[8] '70643b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000008a5a3042
	[  +0.001070] FS-Cache: N-key=[8] '70643b0000000000'
	[  +2.029136] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001008] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000d9fe484b
	[  +0.001140] FS-Cache: O-key=[8] '6f643b0000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000004e2890c8
	[  +0.001074] FS-Cache: N-key=[8] '6f643b0000000000'
	[  +0.310063] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=000000005bafb08b
	[  +0.001102] FS-Cache: O-key=[8] '75643b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=00000000763bdf7d
	[  +0.001071] FS-Cache: N-key=[8] '75643b0000000000'
	
	* 
	* ==> etcd [8e4ec398cc7c440723355258d2257fb31018527d58de0b9fe1726bee93c8e919] <==
	* raft2023/11/01 00:46:01 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/01 00:46:01 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/01 00:46:01 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-01 00:46:01.201100 W | auth: simple token is not cryptographically signed
	2023-11-01 00:46:01.208147 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-01 00:46:01.210202 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-01 00:46:01.210359 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-01 00:46:01.210580 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-01 00:46:01.211055 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/11/01 00:46:01 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-01 00:46:01.211320 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/11/01 00:46:02 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/01 00:46:02 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/01 00:46:02 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/01 00:46:02 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/01 00:46:02 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-01 00:46:02.160906 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-01 00:46:02.161621 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-01 00:46:02.161708 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-01 00:46:02.161765 I | etcdserver: published {Name:ingress-addon-legacy-992876 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-01 00:46:02.161882 I | embed: ready to serve client requests
	2023-11-01 00:46:02.163481 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-01 00:46:02.171187 I | embed: ready to serve client requests
	2023-11-01 00:46:02.172370 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-01 00:46:24.612977 W | etcdserver: request "header:<ID:8128024845207824250 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/ingress-addon-legacy-992876.17935938ac7d0358\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/ingress-addon-legacy-992876.17935938ac7d0358\" value_size:668 lease:8128024845207823848 >> failure:<>>" with result "size:16" took too long (136.143806ms) to execute
	
	* 
	* ==> kernel <==
	*  00:54:27 up  8:36,  0 users,  load average: 0.12, 0.45, 1.08
	Linux ingress-addon-legacy-992876 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [2e16f8346f39ed52a398d8e097d7ebf925359e814b166ef78cd955db0342e7de] <==
	* I1101 00:52:17.972817       1 main.go:227] handling current node
	I1101 00:52:27.983196       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:27.983338       1 main.go:227] handling current node
	I1101 00:52:37.993234       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:37.993263       1 main.go:227] handling current node
	I1101 00:52:47.996709       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:47.996738       1 main.go:227] handling current node
	I1101 00:52:58.000152       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:52:58.000187       1 main.go:227] handling current node
	I1101 00:53:08.010894       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:53:08.010971       1 main.go:227] handling current node
	I1101 00:53:18.022853       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:53:18.022883       1 main.go:227] handling current node
	I1101 00:53:28.029581       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:53:28.029613       1 main.go:227] handling current node
	I1101 00:53:38.040276       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:53:38.040305       1 main.go:227] handling current node
	I1101 00:53:48.047975       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:53:48.048003       1 main.go:227] handling current node
	I1101 00:53:58.055771       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:53:58.055798       1 main.go:227] handling current node
	I1101 00:54:08.065927       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:54:08.065956       1 main.go:227] handling current node
	I1101 00:54:18.069200       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1101 00:54:18.069230       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [a0d57dc63c1b30a6631517dc123efc0e2c011f483027961c382ba3898f284dc7] <==
	* I1101 00:46:06.128604       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I1101 00:46:06.129128       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1101 00:46:06.129137       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1101 00:46:06.252384       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 00:46:06.252471       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 00:46:06.256533       1 cache.go:39] Caches are synced for autoregister controller
	I1101 00:46:06.275156       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1101 00:46:06.343839       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1101 00:46:07.042294       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1101 00:46:07.042427       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1101 00:46:07.055698       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1101 00:46:07.059587       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1101 00:46:07.059608       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1101 00:46:07.451960       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 00:46:07.491101       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1101 00:46:07.630202       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 00:46:07.631168       1 controller.go:609] quota admission added evaluator for: endpoints
	I1101 00:46:07.636820       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 00:46:08.423092       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1101 00:46:08.950649       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1101 00:46:09.079020       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1101 00:46:12.321989       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 00:46:23.885108       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1101 00:46:24.453509       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1101 00:46:52.588524       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [39f31514e884c9b8f272276929f504ba3213d6d23db1f14f6682e6ad5a8b5f12] <==
	* I1101 00:46:24.154532       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"a1c89c3e-bdf9-4185-90a9-53fb37d1fd7a", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1101 00:46:24.231592       1 shared_informer.go:230] Caches are synced for attach detach 
	I1101 00:46:24.249384       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"319e336a-6f08-49bf-9df9-4b34bafabe84", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-b69k9
	I1101 00:46:24.368892       1 shared_informer.go:230] Caches are synced for HPA 
	I1101 00:46:24.395583       1 shared_informer.go:230] Caches are synced for taint 
	I1101 00:46:24.395772       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1101 00:46:24.395854       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-992876. Assuming now as a timestamp.
	I1101 00:46:24.395925       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1101 00:46:24.396281       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1101 00:46:24.397485       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-992876", UID:"2c8b9661-d643-497a-9b42-94d8da4503ba", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-992876 event: Registered Node ingress-addon-legacy-992876 in Controller
	I1101 00:46:24.421602       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1101 00:46:24.445266       1 shared_informer.go:230] Caches are synced for resource quota 
	I1101 00:46:24.445495       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1101 00:46:24.445593       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1101 00:46:24.482241       1 shared_informer.go:230] Caches are synced for resource quota 
	I1101 00:46:24.482383       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1101 00:46:24.708328       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d30a1030-3d5c-4d82-a3b1-451858b49c94", APIVersion:"apps/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-qxwkc
	I1101 00:46:24.739607       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a4504450-622a-4a0e-bfb4-8e77219eb7ce", APIVersion:"apps/v1", ResourceVersion:"229", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-d4npj
	E1101 00:46:24.822618       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"a4504450-622a-4a0e-bfb4-8e77219eb7ce", ResourceVersion:"229", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63834396369, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001486cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001486ce0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001486d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001486d20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001486d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001486d60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001486d80)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001486dc0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000882aa0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400036be98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40005003f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000e600)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400036bee0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1101 00:46:44.396861       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1101 00:46:52.580338       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"91ae13a1-44b8-4e10-b1ed-a96c04c9f131", APIVersion:"apps/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1101 00:46:52.603434       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"9921b77c-8a0c-46b4-a428-76b1cb6477c6", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-xsccv
	I1101 00:46:52.605925       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"39fc3162-e643-439a-b842-a981e2da17d8", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-cqvqs
	I1101 00:46:52.662556       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e31ec691-8dee-48dc-85c7-22477f81feb9", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-6k5st
	
	* 
	* ==> kube-proxy [1e7d915d10b4335ca8c7efe7544aad48640013406847fa709d78e2c6a1b9bceb] <==
	* W1101 00:46:25.287939       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1101 00:46:25.299350       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1101 00:46:25.299400       1 server_others.go:186] Using iptables Proxier.
	I1101 00:46:25.299745       1 server.go:583] Version: v1.18.20
	I1101 00:46:25.306461       1 config.go:315] Starting service config controller
	I1101 00:46:25.306495       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1101 00:46:25.306557       1 config.go:133] Starting endpoints config controller
	I1101 00:46:25.306569       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1101 00:46:25.406666       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1101 00:46:25.406667       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [8ad01671a57d6140cc72551ae79ee5411be2283d0e35a1ad1bd7d76c06951bfd] <==
	* W1101 00:46:06.126196       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 00:46:06.244260       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1101 00:46:06.244365       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1101 00:46:06.247241       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1101 00:46:06.247419       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:46:06.247452       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 00:46:06.247501       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1101 00:46:06.256976       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 00:46:06.281343       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 00:46:06.281558       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 00:46:06.281709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:46:06.281827       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 00:46:06.281956       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 00:46:06.282091       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 00:46:06.282239       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 00:46:06.282376       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 00:46:06.282484       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1101 00:46:06.282615       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 00:46:06.287799       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 00:46:07.180712       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 00:46:07.207189       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 00:46:07.271625       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1101 00:46:07.294980       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 00:46:07.295523       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1101 00:46:10.347586       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Nov 01 00:53:36 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:36.390663    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:53:47 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:47.390423    1646 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:53:47 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:47.390470    1646 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:53:47 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:47.390519    1646 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:53:47 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:47.390549    1646 pod_workers.go:191] Error syncing pod 598a7549-ae33-4ba7-ae21-bcd3ce3044c8 ("kube-ingress-dns-minikube_kube-system(598a7549-ae33-4ba7-ae21-bcd3ce3044c8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 01 00:53:51 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:51.390823    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:53:58 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:58.390724    1646 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:53:58 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:58.390755    1646 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:53:58 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:58.390796    1646 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:53:58 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:53:58.390824    1646 pod_workers.go:191] Error syncing pod 598a7549-ae33-4ba7-ae21-bcd3ce3044c8 ("kube-ingress-dns-minikube_kube-system(598a7549-ae33-4ba7-ae21-bcd3ce3044c8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 01 00:54:06 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:06.390963    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:54:10 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:10.390507    1646 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:54:10 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:10.390551    1646 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:54:10 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:10.390599    1646 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:54:10 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:10.390631    1646 pod_workers.go:191] Error syncing pod 598a7549-ae33-4ba7-ae21-bcd3ce3044c8 ("kube-ingress-dns-minikube_kube-system(598a7549-ae33-4ba7-ae21-bcd3ce3044c8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 01 00:54:11 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:11.757154    1646 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:54:11 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:11.757218    1646 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:54:11 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:11.757278    1646 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 01 00:54:11 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:11.757312    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 01 00:54:19 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:19.390500    1646 pod_workers.go:191] Error syncing pod b122fad5-0dd8-45bb-9eba-3964acdb48d1 ("ingress-nginx-admission-create-xsccv_ingress-nginx(b122fad5-0dd8-45bb-9eba-3964acdb48d1)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 01 00:54:24 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:24.390342    1646 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:54:24 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:24.390374    1646 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:54:24 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:24.390421    1646 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 01 00:54:24 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:24.390453    1646 pod_workers.go:191] Error syncing pod 598a7549-ae33-4ba7-ae21-bcd3ce3044c8 ("kube-ingress-dns-minikube_kube-system(598a7549-ae33-4ba7-ae21-bcd3ce3044c8)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 01 00:54:26 ingress-addon-legacy-992876 kubelet[1646]: E1101 00:54:26.390830    1646 pod_workers.go:191] Error syncing pod 5798040c-7bf2-43f0-b1b2-75359bbe1b64 ("ingress-nginx-admission-patch-6k5st_ingress-nginx(5798040c-7bf2-43f0-b1b2-75359bbe1b64)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [1b41f897ebff0ca1417054f430679068124ab65e868869438aea1ef994a874da] <==
	* I1101 00:46:51.696853       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 00:46:51.727607       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 00:46:51.727767       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 00:46:51.736033       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 00:46:51.737290       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-992876_226eb49a-00a2-408b-abd8-86b18910b449!
	I1101 00:46:51.740172       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7bfbf796-a01d-48f6-a327-a5426ba3862c", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-992876_226eb49a-00a2-408b-abd8-86b18910b449 became leader
	I1101 00:46:51.837994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-992876_226eb49a-00a2-408b-abd8-86b18910b449!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-992876 -n ingress-addon-legacy-992876
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-992876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-xsccv ingress-nginx-admission-patch-6k5st ingress-nginx-controller-7fcf777cb7-cqvqs kube-ingress-dns-minikube
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-992876 describe pod ingress-nginx-admission-create-xsccv ingress-nginx-admission-patch-6k5st ingress-nginx-controller-7fcf777cb7-cqvqs kube-ingress-dns-minikube
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-992876 describe pod ingress-nginx-admission-create-xsccv ingress-nginx-admission-patch-6k5st ingress-nginx-controller-7fcf777cb7-cqvqs kube-ingress-dns-minikube: exit status 1 (87.315662ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xsccv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6k5st" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-cqvqs" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-992876 describe pod ingress-nginx-admission-create-xsccv ingress-nginx-admission-patch-6k5st ingress-nginx-controller-7fcf777cb7-cqvqs kube-ingress-dns-minikube: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (92.53s)
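
The kubelet log above shows two distinct image-pull problems behind this failure: docker.io rate limiting (toomanyrequests) while pulling jettech/kube-webhook-certgen, and short-name resolution failing for cryptexlabs/minikube-ingress-dns because no unqualified-search registries are defined in /etc/containers/registries.conf. The rate limit can only be lifted by authenticating against Docker Hub (per the URL in the log); the short-name error, however, is a CRI-O configuration issue. A minimal sketch of a workaround, assuming the node image allows editing registries.conf and restarting crio (the invocation below is illustrative, not part of the test suite):

	# Sketch: register docker.io as an unqualified-search registry so short
	# names like "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:..." resolve.
	# Assumes root access inside the node and that crio picks up the file on restart.
	$ minikube -p ingress-addon-legacy-992876 ssh -- sudo sh -c \
	    'printf "unqualified-search-registries = [\"docker.io\"]\n" >> /etc/containers/registries.conf && systemctl restart crio'

Fully qualifying the image reference (docker.io/cryptexlabs/...) in the addon manifest would avoid the search-registry lookup entirely, which is why only the short-name reference trips this error.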

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (4.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-2p499 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-2p499 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-2p499 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (248.148121ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-2p499): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-7m7pb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-7m7pb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-7m7pb -- sh -c "ping -c 1 192.168.58.1": exit status 1 (261.176205ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-7m7pb): exit status 1
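
Both pods fail identically with `ping: permission denied (are you root?)`: busybox's ping needs either a raw ICMP socket (CAP_NET_RAW) or an unprivileged ICMP datagram socket, and the kernel only grants the latter to GIDs inside net.ipv4.ping_group_range, which defaults to the empty range "1 0". Since net.ipv4.ping_group_range is on Kubernetes' safe-sysctl list, it can be set per pod. A minimal sketch, assuming the busybox pods belong to a Deployment named "busybox" (inferred from the pod names, not confirmed by the report):

	# Sketch: widen ping_group_range for the pod's own network namespace so
	# unprivileged ICMP echo works without CAP_NET_RAW. The patch path assumes
	# no securityContext is already set on the pod template.
	$ kubectl --context multinode-291182 patch deployment busybox --type=json -p='[
	    {"op":"add","path":"/spec/template/spec/securityContext",
	     "value":{"sysctls":[{"name":"net.ipv4.ping_group_range","value":"0 2147483647"}]}}]'

Setting the sysctl on the node itself would not help here, because ping_group_range is namespaced per network namespace and pods do not inherit the host's value.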
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-291182
helpers_test.go:235: (dbg) docker inspect multinode-291182:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59",
	        "Created": "2023-11-01T01:00:52.921397763Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1267411,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-01T01:00:53.241679334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bd2c3f7c992aecdf624ceae92825f3a10bf56bd552768efdb49aafbacd808193",
	        "ResolvConfPath": "/var/lib/docker/containers/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/hostname",
	        "HostsPath": "/var/lib/docker/containers/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/hosts",
	        "LogPath": "/var/lib/docker/containers/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59-json.log",
	        "Name": "/multinode-291182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-291182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-291182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4e66734a63771fb633d442f2f3ece6f565e8adf9d4b73b8d66e19d994a9aff23-init/diff:/var/lib/docker/overlay2/d052914c945f7ab680be56190d2f2374e48b87c8da40d55e2692538d0bc19343/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e66734a63771fb633d442f2f3ece6f565e8adf9d4b73b8d66e19d994a9aff23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e66734a63771fb633d442f2f3ece6f565e8adf9d4b73b8d66e19d994a9aff23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e66734a63771fb633d442f2f3ece6f565e8adf9d4b73b8d66e19d994a9aff23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-291182",
	                "Source": "/var/lib/docker/volumes/multinode-291182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-291182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-291182",
	                "name.minikube.sigs.k8s.io": "multinode-291182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56d6f83ae4c761318247d84c00e83baea79b76fe38e18222172a0d4eb2e1dd65",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34367"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34366"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34363"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34365"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34364"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/56d6f83ae4c7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-291182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "065d29e000af",
	                        "multinode-291182"
	                    ],
	                    "NetworkID": "249c110faf759976f3e19edbc4ff6aef46e6bf059d393611b020da808688e182",
	                    "EndpointID": "048d936807d70c2acae26aa108f296d981c444917a2d2aec81f2d7239c1055ed",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
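Note: the inspect dump above is the full container record; for triage, individual fields can be pulled with a Go template instead. A minimal sketch, using the same template the harness itself runs later in these logs:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-291182

In this run that resolves to 34367, the ephemeral host port bound to the node's SSH port.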
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-291182 -n multinode-291182
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 logs -n 25
E1101 01:02:55.882300 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-291182 logs -n 25: (1.58169954s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -p mount-start-2-221818                           | mount-start-2-221818 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	|         | --memory=2048 --mount                             |                      |         |                |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |                |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |                |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |                |                     |                     |
	|         | --driver=docker                                   |                      |         |                |                     |                     |
	|         | --container-runtime=crio                          |                      |         |                |                     |                     |
	| ssh     | mount-start-2-221818 ssh -- ls                    | mount-start-2-221818 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| delete  | -p mount-start-1-219914                           | mount-start-1-219914 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |                |                     |                     |
	| ssh     | mount-start-2-221818 ssh -- ls                    | mount-start-2-221818 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| stop    | -p mount-start-2-221818                           | mount-start-2-221818 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	| start   | -p mount-start-2-221818                           | mount-start-2-221818 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	| ssh     | mount-start-2-221818 ssh -- ls                    | mount-start-2-221818 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	|         | /minikube-host                                    |                      |         |                |                     |                     |
	| delete  | -p mount-start-2-221818                           | mount-start-2-221818 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	| delete  | -p mount-start-1-219914                           | mount-start-1-219914 | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:00 UTC |
	| start   | -p multinode-291182                               | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:00 UTC | 01 Nov 23 01:02 UTC |
	|         | --wait=true --memory=2200                         |                      |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |                |                     |                     |
	|         | --alsologtostderr                                 |                      |         |                |                     |                     |
	|         | --driver=docker                                   |                      |         |                |                     |                     |
	|         | --container-runtime=crio                          |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- apply -f                   | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- rollout                    | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | status deployment/busybox                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- get pods -o                | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- get pods -o                | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | busybox-5bc68d56bd-2p499 --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | busybox-5bc68d56bd-7m7pb --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | busybox-5bc68d56bd-2p499 --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | busybox-5bc68d56bd-7m7pb --                       |                      |         |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | busybox-5bc68d56bd-2p499 -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | busybox-5bc68d56bd-7m7pb -- nslookup              |                      |         |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- get pods -o                | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | busybox-5bc68d56bd-2p499                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC |                     |
	|         | busybox-5bc68d56bd-2p499 -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC | 01 Nov 23 01:02 UTC |
	|         | busybox-5bc68d56bd-7m7pb                          |                      |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |                |                     |                     |
	| kubectl | -p multinode-291182 -- exec                       | multinode-291182     | jenkins | v1.32.0-beta.0 | 01 Nov 23 01:02 UTC |                     |
	|         | busybox-5bc68d56bd-7m7pb -- sh                    |                      |         |                |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
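	# Note: the two `ping -c 1 192.168.58.1` rows above are the only Audit entries with an
	# empty End Time, i.e. the PingHostFrom2Pods failure itself: neither busybox pod can
	# reach the docker bridge gateway. A minimal manual repro sketch, assuming the
	# kubeconfig context carries the profile name as in this run:
	#   kubectl --context multinode-291182 exec busybox-5bc68d56bd-2p499 -- sh -c "ping -c 1 192.168.58.1"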
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 01:00:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 01:00:47.455875 1266961 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:00:47.456050 1266961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:00:47.456060 1266961 out.go:309] Setting ErrFile to fd 2...
	I1101 01:00:47.456066 1266961 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:00:47.456331 1266961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 01:00:47.456734 1266961 out.go:303] Setting JSON to false
	I1101 01:00:47.457807 1266961 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31395,"bootTime":1698769053,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 01:00:47.457881 1266961 start.go:138] virtualization:  
	I1101 01:00:47.460252 1266961 out.go:177] * [multinode-291182] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 01:00:47.462581 1266961 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 01:00:47.464317 1266961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 01:00:47.462709 1266961 notify.go:220] Checking for updates...
	I1101 01:00:47.468228 1266961 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:00:47.470059 1266961 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 01:00:47.472238 1266961 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 01:00:47.474173 1266961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 01:00:47.475995 1266961 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 01:00:47.499516 1266961 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 01:00:47.499625 1266961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:00:47.576560 1266961 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-01 01:00:47.566346627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:00:47.576662 1266961 docker.go:295] overlay module found
	I1101 01:00:47.578738 1266961 out.go:177] * Using the docker driver based on user configuration
	I1101 01:00:47.580719 1266961 start.go:298] selected driver: docker
	I1101 01:00:47.580737 1266961 start.go:902] validating driver "docker" against <nil>
	I1101 01:00:47.580750 1266961 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 01:00:47.581427 1266961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:00:47.647646 1266961 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-01 01:00:47.63829165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:00:47.647808 1266961 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1101 01:00:47.648034 1266961 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 01:00:47.649724 1266961 out.go:177] * Using Docker driver with root privileges
	I1101 01:00:47.651292 1266961 cni.go:84] Creating CNI manager for ""
	I1101 01:00:47.651310 1266961 cni.go:136] 0 nodes found, recommending kindnet
	I1101 01:00:47.651319 1266961 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 01:00:47.651334 1266961 start_flags.go:323] config:
	{Name:multinode-291182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-291182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:47.653180 1266961 out.go:177] * Starting control plane node multinode-291182 in cluster multinode-291182
	I1101 01:00:47.655122 1266961 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 01:00:47.656711 1266961 out.go:177] * Pulling base image ...
	I1101 01:00:47.658299 1266961 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:47.658325 1266961 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon
	I1101 01:00:47.658348 1266961 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1101 01:00:47.658356 1266961 cache.go:56] Caching tarball of preloaded images
	I1101 01:00:47.658434 1266961 preload.go:174] Found /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 01:00:47.658444 1266961 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1101 01:00:47.658818 1266961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/config.json ...
	I1101 01:00:47.658846 1266961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/config.json: {Name:mkba687581e8d2b8f42f46ca1de69734de7c57fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:47.678019 1266961 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon, skipping pull
	I1101 01:00:47.678043 1266961 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 exists in daemon, skipping load
	I1101 01:00:47.678061 1266961 cache.go:194] Successfully downloaded all kic artifacts
	I1101 01:00:47.678122 1266961 start.go:365] acquiring machines lock for multinode-291182: {Name:mk8cc38d52665e4430232096c1ede04ac0fa5522 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:00:47.678247 1266961 start.go:369] acquired machines lock for "multinode-291182" in 101.071µs
	I1101 01:00:47.678279 1266961 start.go:93] Provisioning new machine with config: &{Name:multinode-291182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-291182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:00:47.678368 1266961 start.go:125] createHost starting for "" (driver="docker")
	I1101 01:00:47.680641 1266961 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1101 01:00:47.680895 1266961 start.go:159] libmachine.API.Create for "multinode-291182" (driver="docker")
	I1101 01:00:47.680932 1266961 client.go:168] LocalClient.Create starting
	I1101 01:00:47.681035 1266961 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem
	I1101 01:00:47.681102 1266961 main.go:141] libmachine: Decoding PEM data...
	I1101 01:00:47.681124 1266961 main.go:141] libmachine: Parsing certificate...
	I1101 01:00:47.681171 1266961 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem
	I1101 01:00:47.681194 1266961 main.go:141] libmachine: Decoding PEM data...
	I1101 01:00:47.681210 1266961 main.go:141] libmachine: Parsing certificate...
	I1101 01:00:47.681581 1266961 cli_runner.go:164] Run: docker network inspect multinode-291182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 01:00:47.698333 1266961 cli_runner.go:211] docker network inspect multinode-291182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 01:00:47.698415 1266961 network_create.go:281] running [docker network inspect multinode-291182] to gather additional debugging logs...
	I1101 01:00:47.698430 1266961 cli_runner.go:164] Run: docker network inspect multinode-291182
	W1101 01:00:47.715030 1266961 cli_runner.go:211] docker network inspect multinode-291182 returned with exit code 1
	I1101 01:00:47.715058 1266961 network_create.go:284] error running [docker network inspect multinode-291182]: docker network inspect multinode-291182: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-291182 not found
	I1101 01:00:47.715071 1266961 network_create.go:286] output of [docker network inspect multinode-291182]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-291182 not found
	
	** /stderr **
	I1101 01:00:47.715160 1266961 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 01:00:47.732016 1266961 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b5f97457863e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:41:14:12:3e} reservation:<nil>}
	I1101 01:00:47.732378 1266961 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002564360}
	I1101 01:00:47.732407 1266961 network_create.go:124] attempt to create docker network multinode-291182 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1101 01:00:47.732462 1266961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-291182 multinode-291182
	I1101 01:00:47.799287 1266961 network_create.go:108] docker network multinode-291182 192.168.58.0/24 created
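	# Note: a quick check that the freshly created bridge matches the static IP calculated
	# below is a templated network inspect (a sketch, not part of this log):
	#   docker network inspect multinode-291182 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	#   expected output: 192.168.58.0/24 192.168.58.1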
	I1101 01:00:47.799316 1266961 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-291182" container
	I1101 01:00:47.799388 1266961 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 01:00:47.815347 1266961 cli_runner.go:164] Run: docker volume create multinode-291182 --label name.minikube.sigs.k8s.io=multinode-291182 --label created_by.minikube.sigs.k8s.io=true
	I1101 01:00:47.832823 1266961 oci.go:103] Successfully created a docker volume multinode-291182
	I1101 01:00:47.832907 1266961 cli_runner.go:164] Run: docker run --rm --name multinode-291182-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-291182 --entrypoint /usr/bin/test -v multinode-291182:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -d /var/lib
	I1101 01:00:48.405137 1266961 oci.go:107] Successfully prepared a docker volume multinode-291182
	I1101 01:00:48.405194 1266961 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:48.405220 1266961 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 01:00:48.405305 1266961 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-291182:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 01:00:52.840440 1266961 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-291182:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir: (4.43507104s)
	I1101 01:00:52.840471 1266961 kic.go:203] duration metric: took 4.435247 seconds to extract preloaded images to volume
	W1101 01:00:52.840596 1266961 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 01:00:52.840706 1266961 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 01:00:52.905896 1266961 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-291182 --name multinode-291182 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-291182 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-291182 --network multinode-291182 --ip 192.168.58.2 --volume multinode-291182:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458
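	# Note: every --publish=127.0.0.1::<port> flag above leaves the host port empty, so
	# Docker assigns ephemeral ports; that is why the inspect output earlier shows
	# PortBindings with HostPort "" while Ports resolved to 34363-34367. A sketch to list
	# the final mapping:
	#   docker port multinode-291182
	#   e.g. 22/tcp -> 127.0.0.1:34367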
	I1101 01:00:53.250809 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Running}}
	I1101 01:00:53.272812 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Status}}
	I1101 01:00:53.297123 1266961 cli_runner.go:164] Run: docker exec multinode-291182 stat /var/lib/dpkg/alternatives/iptables
	I1101 01:00:53.374013 1266961 oci.go:144] the created container "multinode-291182" has a running status.
	I1101 01:00:53.374040 1266961 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa...
	I1101 01:00:54.125836 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 01:00:54.125929 1266961 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 01:00:54.153758 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Status}}
	I1101 01:00:54.183083 1266961 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 01:00:54.183101 1266961 kic_runner.go:114] Args: [docker exec --privileged multinode-291182 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 01:00:54.274074 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Status}}
	I1101 01:00:54.302559 1266961 machine.go:88] provisioning docker machine ...
	I1101 01:00:54.302597 1266961 ubuntu.go:169] provisioning hostname "multinode-291182"
	I1101 01:00:54.302678 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:00:54.323868 1266961 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.324304 1266961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34367 <nil> <nil>}
	I1101 01:00:54.324317 1266961 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-291182 && echo "multinode-291182" | sudo tee /etc/hostname
	I1101 01:00:54.485120 1266961 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-291182
	
	I1101 01:00:54.485242 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:00:54.510123 1266961 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:54.510537 1266961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34367 <nil> <nil>}
	I1101 01:00:54.510563 1266961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-291182' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-291182/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-291182' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:00:54.650026 1266961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:00:54.650050 1266961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 01:00:54.650074 1266961 ubuntu.go:177] setting up certificates
	I1101 01:00:54.650083 1266961 provision.go:83] configureAuth start
	I1101 01:00:54.650146 1266961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-291182
	I1101 01:00:54.666989 1266961 provision.go:138] copyHostCerts
	I1101 01:00:54.667024 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 01:00:54.667052 1266961 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem, removing ...
	I1101 01:00:54.667060 1266961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 01:00:54.667131 1266961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 01:00:54.667217 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 01:00:54.667236 1266961 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem, removing ...
	I1101 01:00:54.667240 1266961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 01:00:54.667266 1266961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 01:00:54.667315 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 01:00:54.667330 1266961 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem, removing ...
	I1101 01:00:54.667334 1266961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 01:00:54.667357 1266961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 01:00:54.667406 1266961 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.multinode-291182 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-291182]
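	# Note: the server cert generated here is copied to /etc/docker/server.pem on the node
	# a moment later; a hedged sketch to double-check its SANs from the host:
	#   out/minikube-linux-arm64 -p multinode-291182 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text" | grep -A1 'Subject Alternative Name'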
	I1101 01:00:54.880453 1266961 provision.go:172] copyRemoteCerts
	I1101 01:00:54.880518 1266961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:00:54.880564 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:00:54.898775 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
	I1101 01:00:54.999883 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 01:00:54.999978 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 01:00:55.035443 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 01:00:55.035511 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 01:00:55.064876 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 01:00:55.064940 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:00:55.095095 1266961 provision.go:86] duration metric: configureAuth took 444.995896ms
	I1101 01:00:55.095129 1266961 ubuntu.go:193] setting minikube options for container-runtime
	I1101 01:00:55.095332 1266961 config.go:182] Loaded profile config "multinode-291182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:00:55.095467 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:00:55.114219 1266961 main.go:141] libmachine: Using SSH client type: native
	I1101 01:00:55.114661 1266961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34367 <nil> <nil>}
	I1101 01:00:55.114687 1266961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:00:55.371415 1266961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
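	# Note: the %!s(MISSING) in the echoed command above is a Go fmt artifact in the log (a
	# %s verb fed no argument), not part of what ran on the node; the written file should
	# read, as a sketch verifiable over minikube ssh:
	#   cat /etc/sysconfig/crio.minikube
	#   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '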
	I1101 01:00:55.371439 1266961 machine.go:91] provisioned docker machine in 1.068861635s
	I1101 01:00:55.371450 1266961 client.go:171] LocalClient.Create took 7.69050785s
	I1101 01:00:55.371477 1266961 start.go:167] duration metric: libmachine.API.Create for "multinode-291182" took 7.690576207s
	I1101 01:00:55.371485 1266961 start.go:300] post-start starting for "multinode-291182" (driver="docker")
	I1101 01:00:55.371495 1266961 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:00:55.371555 1266961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:00:55.371603 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:00:55.391550 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
	I1101 01:00:55.491553 1266961 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:00:55.495636 1266961 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1101 01:00:55.495692 1266961 command_runner.go:130] > NAME="Ubuntu"
	I1101 01:00:55.495708 1266961 command_runner.go:130] > VERSION_ID="22.04"
	I1101 01:00:55.495720 1266961 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1101 01:00:55.495729 1266961 command_runner.go:130] > VERSION_CODENAME=jammy
	I1101 01:00:55.495734 1266961 command_runner.go:130] > ID=ubuntu
	I1101 01:00:55.495740 1266961 command_runner.go:130] > ID_LIKE=debian
	I1101 01:00:55.495746 1266961 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1101 01:00:55.495759 1266961 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1101 01:00:55.495775 1266961 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1101 01:00:55.495786 1266961 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1101 01:00:55.495792 1266961 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1101 01:00:55.495895 1266961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 01:00:55.495927 1266961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 01:00:55.495942 1266961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 01:00:55.495954 1266961 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 01:00:55.495965 1266961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 01:00:55.496021 1266961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 01:00:55.496103 1266961 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> 12028972.pem in /etc/ssl/certs
	I1101 01:00:55.496114 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> /etc/ssl/certs/12028972.pem
	I1101 01:00:55.496224 1266961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:00:55.507273 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 01:00:55.535646 1266961 start.go:303] post-start completed in 164.145595ms
	I1101 01:00:55.535999 1266961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-291182
	I1101 01:00:55.553783 1266961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/config.json ...
	I1101 01:00:55.554044 1266961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 01:00:55.554098 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:00:55.571471 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
	I1101 01:00:55.666578 1266961 command_runner.go:130] > 11%
	I1101 01:00:55.667115 1266961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 01:00:55.672468 1266961 command_runner.go:130] > 174G
	I1101 01:00:55.672891 1266961 start.go:128] duration metric: createHost completed in 7.994506352s
	I1101 01:00:55.672908 1266961 start.go:83] releasing machines lock for "multinode-291182", held for 7.994648088s
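The two df probes just above are minikube's disk-pressure check for the node: df -h /var | awk 'NR==2{print $5}' reads the used-percentage column (11% here), and df -BG /var | awk 'NR==2{print $4}' reads free space in gigabytes (174G). A minimal sketch of the same probe, assuming a plain local exec.Command in place of minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// diskUsage runs the same df/awk pipelines seen in the log and returns
	// the used-percentage and free-gigabytes columns for a path.
	func diskUsage(path string) (usedPct, freeG string, err error) {
		pct, err := exec.Command("sh", "-c",
			fmt.Sprintf("df -h %s | awk 'NR==2{print $5}'", path)).Output()
		if err != nil {
			return "", "", err
		}
		free, err := exec.Command("sh", "-c",
			fmt.Sprintf("df -BG %s | awk 'NR==2{print $4}'", path)).Output()
		if err != nil {
			return "", "", err
		}
		return strings.TrimSpace(string(pct)), strings.TrimSpace(string(free)), nil
	}

	func main() {
		used, free, err := diskUsage("/var")
		if err != nil {
			panic(err)
		}
		fmt.Printf("used=%s free=%s\n", used, free) // e.g. used=11% free=174G
	}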
	I1101 01:00:55.672979 1266961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-291182
	I1101 01:00:55.690361 1266961 ssh_runner.go:195] Run: cat /version.json
	I1101 01:00:55.690386 1266961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:00:55.690413 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:00:55.690454 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:00:55.710351 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
	I1101 01:00:55.720673 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
	I1101 01:00:55.940251 1266961 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 01:00:55.943704 1266961 command_runner.go:130] > {"iso_version": "v1.32.0-1698684775-17527", "kicbase_version": "v0.0.41-1698773672-17486", "minikube_version": "v1.32.0-beta.0", "commit": "01e1cff766666ed9b9dd97c2a32d71cdb94ff3cf"}
	I1101 01:00:55.943915 1266961 ssh_runner.go:195] Run: systemctl --version
	I1101 01:00:55.949059 1266961 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1101 01:00:55.949090 1266961 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1101 01:00:55.949472 1266961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:00:56.100376 1266961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 01:00:56.105640 1266961 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1101 01:00:56.105709 1266961 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1101 01:00:56.105723 1266961 command_runner.go:130] > Device: 3ah/58d	Inode: 1823288     Links: 1
	I1101 01:00:56.105732 1266961 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 01:00:56.105739 1266961 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1101 01:00:56.105746 1266961 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1101 01:00:56.105755 1266961 command_runner.go:130] > Change: 2023-11-01 00:32:33.104025601 +0000
	I1101 01:00:56.105762 1266961 command_runner.go:130] >  Birth: 2023-11-01 00:32:33.104025601 +0000
	I1101 01:00:56.106053 1266961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:56.131088 1266961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 01:00:56.131161 1266961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:00:56.171606 1266961 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1101 01:00:56.171634 1266961 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
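Note that the default CNI configs are disabled by renaming, not deleting: any *loopback.conf*, *bridge*, or *podman* file under /etc/cni/net.d gets a .mk_disabled suffix so CRI-O ignores it while the change stays reversible. A hedged sketch of the same rename pass (the directory, patterns, and suffix come from the log; walking locally with filepath.Glob instead of remote find -exec is this sketch's simplification):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// disableCNIConfigs renames matching CNI config files so the runtime
	// ignores them, mirroring the log's `find ... -exec mv {} {}.mk_disabled`.
	func disableCNIConfigs(dir string, patterns []string) error {
		for _, pat := range patterns {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return err
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled on an earlier pass
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", m)
			}
		}
		return nil
	}

	func main() {
		_ = disableCNIConfigs("/etc/cni/net.d",
			[]string{"*loopback.conf*", "*bridge*", "*podman*"})
	}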
	I1101 01:00:56.171680 1266961 start.go:472] detecting cgroup driver to use...
	I1101 01:00:56.171717 1266961 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1101 01:00:56.171797 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:00:56.190406 1266961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:00:56.203982 1266961 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:00:56.204066 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:00:56.220620 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:00:56.236632 1266961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:00:56.333336 1266961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:00:56.438552 1266961 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1101 01:00:56.438578 1266961 docker.go:220] disabling docker service ...
	I1101 01:00:56.438647 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:00:56.460352 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:00:56.473975 1266961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:00:56.487834 1266961 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1101 01:00:56.576236 1266961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:00:56.689212 1266961 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1101 01:00:56.689305 1266961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:00:56.702452 1266961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:00:56.720478 1266961 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1101 01:00:56.721812 1266961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:00:56.721876 1266961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:56.733894 1266961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:00:56.734018 1266961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:56.745721 1266961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:00:56.757406 1266961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
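The three sed edits above pin the sandbox image and cgroup settings in /etc/crio/crio.conf.d/02-crio.conf: pause_image becomes "registry.k8s.io/pause:3.9", cgroup_manager becomes "cgroupfs", and a fresh conmon_cgroup = "pod" line is inserted right after it (with the cgroupfs manager, CRI-O expects conmon_cgroup to be "pod", so the pair is changed together; the later crio config dump confirms both values). A sketch of the same rewrite done with Go regexps on a local file instead of remote sed:

	package main

	import (
		"os"
		"regexp"
	)

	// rewriteCrioConf applies the same three edits the log performs with sed:
	// replace the pause_image line, drop any existing conmon_cgroup line, and
	// replace cgroup_manager while re-inserting conmon_cgroup = "pod" after it.
	func rewriteCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).
			ReplaceAll(out, nil) // delete, mirroring sed '/conmon_cgroup = .*/d'
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			panic(err)
		}
	}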
	I1101 01:00:56.769195 1266961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:00:56.781889 1266961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:00:56.791116 1266961 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1101 01:00:56.792389 1266961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:00:56.802442 1266961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:00:56.891766 1266961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:00:57.005218 1266961 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:00:57.005318 1266961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:00:57.010503 1266961 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1101 01:00:57.010525 1266961 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 01:00:57.010561 1266961 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1101 01:00:57.010574 1266961 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 01:00:57.010584 1266961 command_runner.go:130] > Access: 2023-11-01 01:00:56.989607987 +0000
	I1101 01:00:57.010592 1266961 command_runner.go:130] > Modify: 2023-11-01 01:00:56.989607987 +0000
	I1101 01:00:57.010602 1266961 command_runner.go:130] > Change: 2023-11-01 01:00:56.989607987 +0000
	I1101 01:00:57.010610 1266961 command_runner.go:130] >  Birth: -
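The 60s socket wait above is a simple poll: stat /var/run/crio/crio.sock until it appears after the crio restart (the stat output confirms it is a unix socket, mode srw-rw----). A minimal local equivalent of that wait loop, with the path and timeout taken from the log:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a unix socket, or the
	// deadline passes -- the same contract as the 60s crio.sock wait above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
	}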
	I1101 01:00:57.010641 1266961 start.go:540] Will wait 60s for crictl version
	I1101 01:00:57.010704 1266961 ssh_runner.go:195] Run: which crictl
	I1101 01:00:57.014895 1266961 command_runner.go:130] > /usr/bin/crictl
	I1101 01:00:57.015354 1266961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:00:57.056345 1266961 command_runner.go:130] > Version:  0.1.0
	I1101 01:00:57.056363 1266961 command_runner.go:130] > RuntimeName:  cri-o
	I1101 01:00:57.056369 1266961 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1101 01:00:57.056376 1266961 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 01:00:57.058778 1266961 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 01:00:57.058872 1266961 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.106869 1266961 command_runner.go:130] > crio version 1.24.6
	I1101 01:00:57.106891 1266961 command_runner.go:130] > Version:          1.24.6
	I1101 01:00:57.106901 1266961 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1101 01:00:57.106907 1266961 command_runner.go:130] > GitTreeState:     clean
	I1101 01:00:57.106913 1266961 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1101 01:00:57.106919 1266961 command_runner.go:130] > GoVersion:        go1.18.2
	I1101 01:00:57.106925 1266961 command_runner.go:130] > Compiler:         gc
	I1101 01:00:57.106931 1266961 command_runner.go:130] > Platform:         linux/arm64
	I1101 01:00:57.106941 1266961 command_runner.go:130] > Linkmode:         dynamic
	I1101 01:00:57.106955 1266961 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 01:00:57.106963 1266961 command_runner.go:130] > SeccompEnabled:   true
	I1101 01:00:57.106968 1266961 command_runner.go:130] > AppArmorEnabled:  false
	I1101 01:00:57.109146 1266961 ssh_runner.go:195] Run: crio --version
	I1101 01:00:57.150742 1266961 command_runner.go:130] > crio version 1.24.6
	I1101 01:00:57.150764 1266961 command_runner.go:130] > Version:          1.24.6
	I1101 01:00:57.150773 1266961 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1101 01:00:57.150779 1266961 command_runner.go:130] > GitTreeState:     clean
	I1101 01:00:57.150792 1266961 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1101 01:00:57.150798 1266961 command_runner.go:130] > GoVersion:        go1.18.2
	I1101 01:00:57.150809 1266961 command_runner.go:130] > Compiler:         gc
	I1101 01:00:57.150816 1266961 command_runner.go:130] > Platform:         linux/arm64
	I1101 01:00:57.150826 1266961 command_runner.go:130] > Linkmode:         dynamic
	I1101 01:00:57.150844 1266961 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 01:00:57.150853 1266961 command_runner.go:130] > SeccompEnabled:   true
	I1101 01:00:57.150859 1266961 command_runner.go:130] > AppArmorEnabled:  false
	I1101 01:00:57.154861 1266961 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1101 01:00:57.156795 1266961 cli_runner.go:164] Run: docker network inspect multinode-291182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 01:00:57.173231 1266961 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1101 01:00:57.177622 1266961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
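The /etc/hosts edit above follows a filter-then-append pattern: strip any existing host.minikube.internal line, append the fresh mapping (192.168.58.1 here, the gateway of the node's docker network), and replace the file through a temp copy so no partial write is visible. A sketch of the same pattern, assuming local file access in place of the log's "> /tmp/h.$$; sudo cp" sequence:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHost rewrites an /etc/hosts-style file so that exactly one line
	// maps name to ip, using a temp file + rename for an atomic swap.
	func pinHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // drop stale entries
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		_ = pinHost("/etc/hosts", "192.168.58.1", "host.minikube.internal")
	}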
	I1101 01:00:57.190657 1266961 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:00:57.190725 1266961 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:57.255339 1266961 command_runner.go:130] > {
	I1101 01:00:57.255360 1266961 command_runner.go:130] >   "images": [
	I1101 01:00:57.255367 1266961 command_runner.go:130] >     {
	I1101 01:00:57.255379 1266961 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1101 01:00:57.255385 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.255393 1266961 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1101 01:00:57.255397 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255403 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.255414 1266961 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1101 01:00:57.255423 1266961 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1101 01:00:57.255430 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255436 1266961 command_runner.go:130] >       "size": "60867618",
	I1101 01:00:57.255445 1266961 command_runner.go:130] >       "uid": null,
	I1101 01:00:57.255450 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.255464 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.255473 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.255478 1266961 command_runner.go:130] >     },
	I1101 01:00:57.255482 1266961 command_runner.go:130] >     {
	I1101 01:00:57.255490 1266961 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1101 01:00:57.255495 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.255508 1266961 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1101 01:00:57.255516 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255521 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.255531 1266961 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1101 01:00:57.255545 1266961 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1101 01:00:57.255550 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255561 1266961 command_runner.go:130] >       "size": "29037500",
	I1101 01:00:57.255566 1266961 command_runner.go:130] >       "uid": null,
	I1101 01:00:57.255571 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.255576 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.255581 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.255587 1266961 command_runner.go:130] >     },
	I1101 01:00:57.255592 1266961 command_runner.go:130] >     {
	I1101 01:00:57.255602 1266961 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1101 01:00:57.255617 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.255624 1266961 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1101 01:00:57.255629 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255636 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.255647 1266961 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1101 01:00:57.255659 1266961 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1101 01:00:57.255666 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255674 1266961 command_runner.go:130] >       "size": "51393451",
	I1101 01:00:57.255679 1266961 command_runner.go:130] >       "uid": null,
	I1101 01:00:57.255687 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.255692 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.255698 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.255702 1266961 command_runner.go:130] >     },
	I1101 01:00:57.255709 1266961 command_runner.go:130] >     {
	I1101 01:00:57.255717 1266961 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1101 01:00:57.255722 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.255728 1266961 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1101 01:00:57.255735 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255740 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.255752 1266961 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1101 01:00:57.255763 1266961 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1101 01:00:57.255773 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255783 1266961 command_runner.go:130] >       "size": "182203183",
	I1101 01:00:57.255788 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.255795 1266961 command_runner.go:130] >         "value": "0"
	I1101 01:00:57.255800 1266961 command_runner.go:130] >       },
	I1101 01:00:57.255807 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.255812 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.255818 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.255822 1266961 command_runner.go:130] >     },
	I1101 01:00:57.255829 1266961 command_runner.go:130] >     {
	I1101 01:00:57.255837 1266961 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1101 01:00:57.255844 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.255851 1266961 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1101 01:00:57.255856 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255863 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.255874 1266961 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1101 01:00:57.255887 1266961 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1101 01:00:57.255891 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255897 1266961 command_runner.go:130] >       "size": "121054158",
	I1101 01:00:57.255903 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.255910 1266961 command_runner.go:130] >         "value": "0"
	I1101 01:00:57.255916 1266961 command_runner.go:130] >       },
	I1101 01:00:57.255921 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.255929 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.255934 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.255938 1266961 command_runner.go:130] >     },
	I1101 01:00:57.255942 1266961 command_runner.go:130] >     {
	I1101 01:00:57.255952 1266961 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1101 01:00:57.255959 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.255966 1266961 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1101 01:00:57.255970 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.255976 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.255989 1266961 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1101 01:00:57.255998 1266961 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1101 01:00:57.256005 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.256010 1266961 command_runner.go:130] >       "size": "117252916",
	I1101 01:00:57.256015 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.256022 1266961 command_runner.go:130] >         "value": "0"
	I1101 01:00:57.256029 1266961 command_runner.go:130] >       },
	I1101 01:00:57.256034 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.256039 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.256047 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.256051 1266961 command_runner.go:130] >     },
	I1101 01:00:57.256055 1266961 command_runner.go:130] >     {
	I1101 01:00:57.256065 1266961 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1101 01:00:57.256072 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.256079 1266961 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1101 01:00:57.256083 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.256091 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.256100 1266961 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1101 01:00:57.256113 1266961 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1101 01:00:57.256117 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.256123 1266961 command_runner.go:130] >       "size": "69926807",
	I1101 01:00:57.256130 1266961 command_runner.go:130] >       "uid": null,
	I1101 01:00:57.256135 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.256142 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.256150 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.256154 1266961 command_runner.go:130] >     },
	I1101 01:00:57.256159 1266961 command_runner.go:130] >     {
	I1101 01:00:57.256169 1266961 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1101 01:00:57.256177 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.256183 1266961 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1101 01:00:57.256188 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.256195 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.256231 1266961 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1101 01:00:57.256244 1266961 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1101 01:00:57.256249 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.256256 1266961 command_runner.go:130] >       "size": "59188020",
	I1101 01:00:57.256265 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.256270 1266961 command_runner.go:130] >         "value": "0"
	I1101 01:00:57.256274 1266961 command_runner.go:130] >       },
	I1101 01:00:57.256279 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.256287 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.256294 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.256301 1266961 command_runner.go:130] >     },
	I1101 01:00:57.256305 1266961 command_runner.go:130] >     {
	I1101 01:00:57.256313 1266961 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1101 01:00:57.256321 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.256326 1266961 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1101 01:00:57.256333 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.256340 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.256351 1266961 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1101 01:00:57.256360 1266961 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1101 01:00:57.256367 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.256372 1266961 command_runner.go:130] >       "size": "520014",
	I1101 01:00:57.256377 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.256385 1266961 command_runner.go:130] >         "value": "65535"
	I1101 01:00:57.256389 1266961 command_runner.go:130] >       },
	I1101 01:00:57.256394 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.256407 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.256412 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.256421 1266961 command_runner.go:130] >     }
	I1101 01:00:57.256430 1266961 command_runner.go:130] >   ]
	I1101 01:00:57.256434 1266961 command_runner.go:130] > }
	I1101 01:00:57.259282 1266961 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:00:57.259305 1266961 crio.go:415] Images already preloaded, skipping extraction
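The preload decision above rests on decoding `sudo crictl images --output json`: parse the images array and check that every repoTag expected for the target Kubernetes version is already present. A sketch of that decode against the exact JSON shape shown in the log (only the fields used here are modeled, and the wanted-tag list is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImages models just the fields of `crictl images --output json`
	// that this check needs; the full objects carry more (size, uid, etc.).
	type crictlImages struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImages reports whether every wanted repoTag is already in the store.
	func hasImages(wanted []string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list crictlImages
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, w := range wanted {
			if !have[w] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := hasImages([]string{
			"registry.k8s.io/kube-apiserver:v1.28.3",
			"registry.k8s.io/etcd:3.5.9-0",
		})
		fmt.Println(ok, err)
	}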
	I1101 01:00:57.259367 1266961 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 01:00:57.298507 1266961 command_runner.go:130] > {
	I1101 01:00:57.298529 1266961 command_runner.go:130] >   "images": [
	I1101 01:00:57.298535 1266961 command_runner.go:130] >     {
	I1101 01:00:57.298545 1266961 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1101 01:00:57.298550 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.298558 1266961 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1101 01:00:57.298563 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298568 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.298579 1266961 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1101 01:00:57.298588 1266961 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1101 01:00:57.298593 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298601 1266961 command_runner.go:130] >       "size": "60867618",
	I1101 01:00:57.298607 1266961 command_runner.go:130] >       "uid": null,
	I1101 01:00:57.298617 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.298626 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.298634 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.298639 1266961 command_runner.go:130] >     },
	I1101 01:00:57.298643 1266961 command_runner.go:130] >     {
	I1101 01:00:57.298656 1266961 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1101 01:00:57.298662 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.298671 1266961 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1101 01:00:57.298676 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298681 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.298691 1266961 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1101 01:00:57.298700 1266961 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1101 01:00:57.298705 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298711 1266961 command_runner.go:130] >       "size": "29037500",
	I1101 01:00:57.298716 1266961 command_runner.go:130] >       "uid": null,
	I1101 01:00:57.298721 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.298725 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.298730 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.298734 1266961 command_runner.go:130] >     },
	I1101 01:00:57.298738 1266961 command_runner.go:130] >     {
	I1101 01:00:57.298746 1266961 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1101 01:00:57.298751 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.298759 1266961 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1101 01:00:57.298766 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298774 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.298783 1266961 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1101 01:00:57.298795 1266961 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1101 01:00:57.298800 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298805 1266961 command_runner.go:130] >       "size": "51393451",
	I1101 01:00:57.298813 1266961 command_runner.go:130] >       "uid": null,
	I1101 01:00:57.298818 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.298823 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.298830 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.298834 1266961 command_runner.go:130] >     },
	I1101 01:00:57.298839 1266961 command_runner.go:130] >     {
	I1101 01:00:57.298847 1266961 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1101 01:00:57.298854 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.298861 1266961 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1101 01:00:57.298868 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298873 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.298882 1266961 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1101 01:00:57.298897 1266961 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1101 01:00:57.298912 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298918 1266961 command_runner.go:130] >       "size": "182203183",
	I1101 01:00:57.298923 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.298928 1266961 command_runner.go:130] >         "value": "0"
	I1101 01:00:57.298934 1266961 command_runner.go:130] >       },
	I1101 01:00:57.298940 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.298948 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.298953 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.298957 1266961 command_runner.go:130] >     },
	I1101 01:00:57.298962 1266961 command_runner.go:130] >     {
	I1101 01:00:57.298971 1266961 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1101 01:00:57.298979 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.298985 1266961 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1101 01:00:57.298990 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.298995 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.299007 1266961 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1101 01:00:57.299017 1266961 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1101 01:00:57.299029 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299034 1266961 command_runner.go:130] >       "size": "121054158",
	I1101 01:00:57.299039 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.299047 1266961 command_runner.go:130] >         "value": "0"
	I1101 01:00:57.299051 1266961 command_runner.go:130] >       },
	I1101 01:00:57.299057 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.299064 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.299069 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.299074 1266961 command_runner.go:130] >     },
	I1101 01:00:57.299078 1266961 command_runner.go:130] >     {
	I1101 01:00:57.299092 1266961 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1101 01:00:57.299098 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.299107 1266961 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1101 01:00:57.299115 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299120 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.299129 1266961 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1101 01:00:57.299142 1266961 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1101 01:00:57.299147 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299156 1266961 command_runner.go:130] >       "size": "117252916",
	I1101 01:00:57.299161 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.299166 1266961 command_runner.go:130] >         "value": "0"
	I1101 01:00:57.299173 1266961 command_runner.go:130] >       },
	I1101 01:00:57.299178 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.299183 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.299189 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.299196 1266961 command_runner.go:130] >     },
	I1101 01:00:57.299200 1266961 command_runner.go:130] >     {
	I1101 01:00:57.299215 1266961 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1101 01:00:57.299220 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.299226 1266961 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1101 01:00:57.299233 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299238 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.299247 1266961 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1101 01:00:57.299260 1266961 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1101 01:00:57.299267 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299273 1266961 command_runner.go:130] >       "size": "69926807",
	I1101 01:00:57.299284 1266961 command_runner.go:130] >       "uid": null,
	I1101 01:00:57.299290 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.299295 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.299303 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.299307 1266961 command_runner.go:130] >     },
	I1101 01:00:57.299311 1266961 command_runner.go:130] >     {
	I1101 01:00:57.299322 1266961 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1101 01:00:57.299327 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.299333 1266961 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1101 01:00:57.299338 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299343 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.299390 1266961 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1101 01:00:57.299405 1266961 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1101 01:00:57.299410 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299415 1266961 command_runner.go:130] >       "size": "59188020",
	I1101 01:00:57.299420 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.299425 1266961 command_runner.go:130] >         "value": "0"
	I1101 01:00:57.299440 1266961 command_runner.go:130] >       },
	I1101 01:00:57.299447 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.299453 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.299460 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.299465 1266961 command_runner.go:130] >     },
	I1101 01:00:57.299469 1266961 command_runner.go:130] >     {
	I1101 01:00:57.299479 1266961 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1101 01:00:57.299485 1266961 command_runner.go:130] >       "repoTags": [
	I1101 01:00:57.299490 1266961 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1101 01:00:57.299497 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299502 1266961 command_runner.go:130] >       "repoDigests": [
	I1101 01:00:57.299511 1266961 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1101 01:00:57.299522 1266961 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1101 01:00:57.299529 1266961 command_runner.go:130] >       ],
	I1101 01:00:57.299534 1266961 command_runner.go:130] >       "size": "520014",
	I1101 01:00:57.299538 1266961 command_runner.go:130] >       "uid": {
	I1101 01:00:57.299544 1266961 command_runner.go:130] >         "value": "65535"
	I1101 01:00:57.299551 1266961 command_runner.go:130] >       },
	I1101 01:00:57.299556 1266961 command_runner.go:130] >       "username": "",
	I1101 01:00:57.299567 1266961 command_runner.go:130] >       "spec": null,
	I1101 01:00:57.299575 1266961 command_runner.go:130] >       "pinned": false
	I1101 01:00:57.299579 1266961 command_runner.go:130] >     }
	I1101 01:00:57.299583 1266961 command_runner.go:130] >   ]
	I1101 01:00:57.299587 1266961 command_runner.go:130] > }
	I1101 01:00:57.302430 1266961 crio.go:496] all images are preloaded for cri-o runtime.
	I1101 01:00:57.302452 1266961 cache_images.go:84] Images are preloaded, skipping loading
	I1101 01:00:57.302528 1266961 ssh_runner.go:195] Run: crio config
	I1101 01:00:57.351726 1266961 command_runner.go:130] ! time="2023-11-01 01:00:57.351384222Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1101 01:00:57.352057 1266961 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1101 01:00:57.368089 1266961 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1101 01:00:57.368109 1266961 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1101 01:00:57.368117 1266961 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1101 01:00:57.368123 1266961 command_runner.go:130] > #
	I1101 01:00:57.368132 1266961 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1101 01:00:57.368140 1266961 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1101 01:00:57.368148 1266961 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1101 01:00:57.368156 1266961 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1101 01:00:57.368160 1266961 command_runner.go:130] > # reload'.
	I1101 01:00:57.368168 1266961 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1101 01:00:57.368176 1266961 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1101 01:00:57.368184 1266961 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1101 01:00:57.368198 1266961 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1101 01:00:57.368202 1266961 command_runner.go:130] > [crio]
	I1101 01:00:57.368209 1266961 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1101 01:00:57.368220 1266961 command_runner.go:130] > # containers images, in this directory.
	I1101 01:00:57.368229 1266961 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1101 01:00:57.368241 1266961 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1101 01:00:57.368247 1266961 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1101 01:00:57.368257 1266961 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1101 01:00:57.368265 1266961 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1101 01:00:57.368270 1266961 command_runner.go:130] > # storage_driver = "vfs"
	I1101 01:00:57.368277 1266961 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1101 01:00:57.368288 1266961 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1101 01:00:57.368292 1266961 command_runner.go:130] > # storage_option = [
	I1101 01:00:57.368298 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.368309 1266961 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1101 01:00:57.368316 1266961 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1101 01:00:57.368322 1266961 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1101 01:00:57.368334 1266961 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1101 01:00:57.368349 1266961 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1101 01:00:57.368355 1266961 command_runner.go:130] > # always happen on a node reboot
	I1101 01:00:57.368361 1266961 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1101 01:00:57.368368 1266961 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1101 01:00:57.368375 1266961 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1101 01:00:57.368384 1266961 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1101 01:00:57.368390 1266961 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1101 01:00:57.368409 1266961 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1101 01:00:57.368419 1266961 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1101 01:00:57.368424 1266961 command_runner.go:130] > # internal_wipe = true
	I1101 01:00:57.368432 1266961 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1101 01:00:57.368439 1266961 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1101 01:00:57.368479 1266961 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1101 01:00:57.368486 1266961 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1101 01:00:57.368493 1266961 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1101 01:00:57.368498 1266961 command_runner.go:130] > [crio.api]
	I1101 01:00:57.368505 1266961 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1101 01:00:57.368510 1266961 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1101 01:00:57.368521 1266961 command_runner.go:130] > # IP address on which the stream server will listen.
	I1101 01:00:57.368530 1266961 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1101 01:00:57.368538 1266961 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1101 01:00:57.368548 1266961 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1101 01:00:57.368553 1266961 command_runner.go:130] > # stream_port = "0"
	I1101 01:00:57.368559 1266961 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1101 01:00:57.368566 1266961 command_runner.go:130] > # stream_enable_tls = false
	I1101 01:00:57.368578 1266961 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1101 01:00:57.368583 1266961 command_runner.go:130] > # stream_idle_timeout = ""
	I1101 01:00:57.368591 1266961 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1101 01:00:57.368600 1266961 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1101 01:00:57.368605 1266961 command_runner.go:130] > # minutes.
	I1101 01:00:57.368610 1266961 command_runner.go:130] > # stream_tls_cert = ""
	I1101 01:00:57.368619 1266961 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1101 01:00:57.368630 1266961 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1101 01:00:57.368636 1266961 command_runner.go:130] > # stream_tls_key = ""
	I1101 01:00:57.368649 1266961 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1101 01:00:57.368656 1266961 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1101 01:00:57.368669 1266961 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1101 01:00:57.368674 1266961 command_runner.go:130] > # stream_tls_ca = ""
	I1101 01:00:57.368683 1266961 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 01:00:57.368691 1266961 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1101 01:00:57.368700 1266961 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 01:00:57.368709 1266961 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1101 01:00:57.368729 1266961 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1101 01:00:57.368738 1266961 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1101 01:00:57.368743 1266961 command_runner.go:130] > [crio.runtime]
	I1101 01:00:57.368750 1266961 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1101 01:00:57.368756 1266961 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1101 01:00:57.368761 1266961 command_runner.go:130] > # "nofile=1024:2048"
	I1101 01:00:57.368769 1266961 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1101 01:00:57.368774 1266961 command_runner.go:130] > # default_ulimits = [
	I1101 01:00:57.368778 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.368785 1266961 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1101 01:00:57.368790 1266961 command_runner.go:130] > # no_pivot = false
	I1101 01:00:57.368797 1266961 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1101 01:00:57.368806 1266961 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1101 01:00:57.368819 1266961 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1101 01:00:57.368826 1266961 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1101 01:00:57.368838 1266961 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1101 01:00:57.368846 1266961 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 01:00:57.368854 1266961 command_runner.go:130] > # conmon = ""
	I1101 01:00:57.368860 1266961 command_runner.go:130] > # Cgroup setting for conmon
	I1101 01:00:57.368868 1266961 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1101 01:00:57.368877 1266961 command_runner.go:130] > conmon_cgroup = "pod"
	I1101 01:00:57.368884 1266961 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1101 01:00:57.368890 1266961 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1101 01:00:57.368899 1266961 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 01:00:57.368907 1266961 command_runner.go:130] > # conmon_env = [
	I1101 01:00:57.368911 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.368917 1266961 command_runner.go:130] > # Additional environment variables to set for all the
	I1101 01:00:57.368927 1266961 command_runner.go:130] > # containers. These are overridden if set in the
	I1101 01:00:57.368934 1266961 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1101 01:00:57.368942 1266961 command_runner.go:130] > # default_env = [
	I1101 01:00:57.368949 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.368961 1266961 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1101 01:00:57.368966 1266961 command_runner.go:130] > # selinux = false
	I1101 01:00:57.368974 1266961 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1101 01:00:57.368996 1266961 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1101 01:00:57.369008 1266961 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1101 01:00:57.369013 1266961 command_runner.go:130] > # seccomp_profile = ""
	I1101 01:00:57.369020 1266961 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1101 01:00:57.369030 1266961 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1101 01:00:57.369038 1266961 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1101 01:00:57.369048 1266961 command_runner.go:130] > # which might increase security.
	I1101 01:00:57.369054 1266961 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1101 01:00:57.369061 1266961 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1101 01:00:57.369069 1266961 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1101 01:00:57.369081 1266961 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1101 01:00:57.369090 1266961 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1101 01:00:57.369100 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:00:57.369106 1266961 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1101 01:00:57.369119 1266961 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1101 01:00:57.369124 1266961 command_runner.go:130] > # the cgroup blockio controller.
	I1101 01:00:57.369136 1266961 command_runner.go:130] > # blockio_config_file = ""
	I1101 01:00:57.369144 1266961 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1101 01:00:57.369149 1266961 command_runner.go:130] > # irqbalance daemon.
	I1101 01:00:57.369155 1266961 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1101 01:00:57.369167 1266961 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1101 01:00:57.369174 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:00:57.369183 1266961 command_runner.go:130] > # rdt_config_file = ""
	I1101 01:00:57.369189 1266961 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1101 01:00:57.369195 1266961 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1101 01:00:57.369208 1266961 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1101 01:00:57.369213 1266961 command_runner.go:130] > # separate_pull_cgroup = ""
	I1101 01:00:57.369221 1266961 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1101 01:00:57.369228 1266961 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1101 01:00:57.369233 1266961 command_runner.go:130] > # will be added.
	I1101 01:00:57.369239 1266961 command_runner.go:130] > # default_capabilities = [
	I1101 01:00:57.369245 1266961 command_runner.go:130] > # 	"CHOWN",
	I1101 01:00:57.369257 1266961 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1101 01:00:57.369262 1266961 command_runner.go:130] > # 	"FSETID",
	I1101 01:00:57.369272 1266961 command_runner.go:130] > # 	"FOWNER",
	I1101 01:00:57.369277 1266961 command_runner.go:130] > # 	"SETGID",
	I1101 01:00:57.369281 1266961 command_runner.go:130] > # 	"SETUID",
	I1101 01:00:57.369290 1266961 command_runner.go:130] > # 	"SETPCAP",
	I1101 01:00:57.369295 1266961 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1101 01:00:57.369300 1266961 command_runner.go:130] > # 	"KILL",
	I1101 01:00:57.369304 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.369313 1266961 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1101 01:00:57.369323 1266961 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1101 01:00:57.369329 1266961 command_runner.go:130] > # add_inheritable_capabilities = true
	I1101 01:00:57.369338 1266961 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1101 01:00:57.369349 1266961 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 01:00:57.369354 1266961 command_runner.go:130] > # default_sysctls = [
	I1101 01:00:57.369364 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.369370 1266961 command_runner.go:130] > # List of devices on the host that a
	I1101 01:00:57.369377 1266961 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1101 01:00:57.369387 1266961 command_runner.go:130] > # allowed_devices = [
	I1101 01:00:57.369392 1266961 command_runner.go:130] > # 	"/dev/fuse",
	I1101 01:00:57.369396 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.369403 1266961 command_runner.go:130] > # List of additional devices, specified as
	I1101 01:00:57.369432 1266961 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1101 01:00:57.369443 1266961 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1101 01:00:57.369450 1266961 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 01:00:57.369459 1266961 command_runner.go:130] > # additional_devices = [
	I1101 01:00:57.369464 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.369470 1266961 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1101 01:00:57.369478 1266961 command_runner.go:130] > # cdi_spec_dirs = [
	I1101 01:00:57.369483 1266961 command_runner.go:130] > # 	"/etc/cdi",
	I1101 01:00:57.369488 1266961 command_runner.go:130] > # 	"/var/run/cdi",
	I1101 01:00:57.369492 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.369502 1266961 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1101 01:00:57.369510 1266961 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking the host's uid/gid.
	I1101 01:00:57.369518 1266961 command_runner.go:130] > # Defaults to false.
	I1101 01:00:57.369528 1266961 command_runner.go:130] > # device_ownership_from_security_context = false
	I1101 01:00:57.369538 1266961 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1101 01:00:57.369549 1266961 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1101 01:00:57.369554 1266961 command_runner.go:130] > # hooks_dir = [
	I1101 01:00:57.369565 1266961 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1101 01:00:57.369569 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.369582 1266961 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1101 01:00:57.369592 1266961 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1101 01:00:57.369604 1266961 command_runner.go:130] > # its default mounts from the following two files:
	I1101 01:00:57.369612 1266961 command_runner.go:130] > #
	I1101 01:00:57.369619 1266961 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1101 01:00:57.369632 1266961 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1101 01:00:57.369639 1266961 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1101 01:00:57.369647 1266961 command_runner.go:130] > #
	I1101 01:00:57.369654 1266961 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1101 01:00:57.369662 1266961 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1101 01:00:57.369670 1266961 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1101 01:00:57.369678 1266961 command_runner.go:130] > #      only add mounts it finds in this file.
	I1101 01:00:57.369682 1266961 command_runner.go:130] > #
	I1101 01:00:57.369688 1266961 command_runner.go:130] > # default_mounts_file = ""
	I1101 01:00:57.369699 1266961 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1101 01:00:57.369707 1266961 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1101 01:00:57.369715 1266961 command_runner.go:130] > # pids_limit = 0
	I1101 01:00:57.369722 1266961 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1101 01:00:57.369738 1266961 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1101 01:00:57.369746 1266961 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1101 01:00:57.369757 1266961 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1101 01:00:57.369764 1266961 command_runner.go:130] > # log_size_max = -1
	I1101 01:00:57.369772 1266961 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I1101 01:00:57.369780 1266961 command_runner.go:130] > # log_to_journald = false
	I1101 01:00:57.369790 1266961 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1101 01:00:57.369796 1266961 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1101 01:00:57.369802 1266961 command_runner.go:130] > # Path to directory for container attach sockets.
	I1101 01:00:57.369811 1266961 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1101 01:00:57.369817 1266961 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1101 01:00:57.369826 1266961 command_runner.go:130] > # bind_mount_prefix = ""
	I1101 01:00:57.369832 1266961 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1101 01:00:57.369839 1266961 command_runner.go:130] > # read_only = false
	I1101 01:00:57.369847 1266961 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1101 01:00:57.369854 1266961 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1101 01:00:57.369859 1266961 command_runner.go:130] > # live configuration reload.
	I1101 01:00:57.369865 1266961 command_runner.go:130] > # log_level = "info"
	I1101 01:00:57.369874 1266961 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1101 01:00:57.369882 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:00:57.369887 1266961 command_runner.go:130] > # log_filter = ""
	I1101 01:00:57.369897 1266961 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1101 01:00:57.369904 1266961 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1101 01:00:57.369909 1266961 command_runner.go:130] > # separated by comma.
	I1101 01:00:57.369916 1266961 command_runner.go:130] > # uid_mappings = ""
	I1101 01:00:57.369924 1266961 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1101 01:00:57.369931 1266961 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1101 01:00:57.369936 1266961 command_runner.go:130] > # separated by comma.
	I1101 01:00:57.369941 1266961 command_runner.go:130] > # gid_mappings = ""
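As a worked instance of the containerID:HostID:Size format just described, a hypothetical mapping that remaps the container's root user onto an unprivileged host range could read as follows (the 100000/65536 values are assumptions for illustration, not from this run):

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"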
	I1101 01:00:57.369953 1266961 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1101 01:00:57.369964 1266961 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 01:00:57.369976 1266961 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 01:00:57.369982 1266961 command_runner.go:130] > # minimum_mappable_uid = -1
	I1101 01:00:57.369989 1266961 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1101 01:00:57.370001 1266961 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 01:00:57.370008 1266961 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 01:00:57.370014 1266961 command_runner.go:130] > # minimum_mappable_gid = -1
	I1101 01:00:57.370021 1266961 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1101 01:00:57.370030 1266961 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1101 01:00:57.370041 1266961 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1101 01:00:57.370048 1266961 command_runner.go:130] > # ctr_stop_timeout = 30
	I1101 01:00:57.370057 1266961 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1101 01:00:57.370069 1266961 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1101 01:00:57.370078 1266961 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1101 01:00:57.370085 1266961 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1101 01:00:57.370090 1266961 command_runner.go:130] > # drop_infra_ctr = true
	I1101 01:00:57.370097 1266961 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1101 01:00:57.370104 1266961 command_runner.go:130] > # You can use the Linux CPU list format to specify desired CPUs.
	I1101 01:00:57.370113 1266961 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1101 01:00:57.370123 1266961 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1101 01:00:57.370130 1266961 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1101 01:00:57.370136 1266961 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1101 01:00:57.370144 1266961 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1101 01:00:57.370166 1266961 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1101 01:00:57.370173 1266961 command_runner.go:130] > # pinns_path = ""
	I1101 01:00:57.370180 1266961 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1101 01:00:57.370188 1266961 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1101 01:00:57.370195 1266961 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1101 01:00:57.370203 1266961 command_runner.go:130] > # default_runtime = "runc"
	I1101 01:00:57.370209 1266961 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1101 01:00:57.370218 1266961 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1101 01:00:57.370231 1266961 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1101 01:00:57.370244 1266961 command_runner.go:130] > # creation as a file is not desired either.
	I1101 01:00:57.370254 1266961 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1101 01:00:57.370261 1266961 command_runner.go:130] > # the hostname is being managed dynamically.
	I1101 01:00:57.370266 1266961 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1101 01:00:57.370278 1266961 command_runner.go:130] > # ]
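Following the /etc/hostname example given in the comment above, the guard would be written as:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]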
	I1101 01:00:57.370287 1266961 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1101 01:00:57.370295 1266961 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1101 01:00:57.370306 1266961 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1101 01:00:57.370314 1266961 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1101 01:00:57.370321 1266961 command_runner.go:130] > #
	I1101 01:00:57.370326 1266961 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1101 01:00:57.370332 1266961 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1101 01:00:57.370337 1266961 command_runner.go:130] > #  runtime_type = "oci"
	I1101 01:00:57.370344 1266961 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1101 01:00:57.370350 1266961 command_runner.go:130] > #  privileged_without_host_devices = false
	I1101 01:00:57.370358 1266961 command_runner.go:130] > #  allowed_annotations = []
	I1101 01:00:57.370363 1266961 command_runner.go:130] > # Where:
	I1101 01:00:57.370369 1266961 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1101 01:00:57.370379 1266961 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1101 01:00:57.370387 1266961 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1101 01:00:57.370397 1266961 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1101 01:00:57.370402 1266961 command_runner.go:130] > #   in $PATH.
	I1101 01:00:57.370414 1266961 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1101 01:00:57.370421 1266961 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1101 01:00:57.370429 1266961 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1101 01:00:57.370436 1266961 command_runner.go:130] > #   state.
	I1101 01:00:57.370444 1266961 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1101 01:00:57.370454 1266961 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1101 01:00:57.370462 1266961 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1101 01:00:57.370471 1266961 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1101 01:00:57.370478 1266961 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1101 01:00:57.370489 1266961 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1101 01:00:57.370494 1266961 command_runner.go:130] > #   The currently recognized values are:
	I1101 01:00:57.370502 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1101 01:00:57.370511 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1101 01:00:57.370521 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1101 01:00:57.370531 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1101 01:00:57.370540 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1101 01:00:57.370550 1266961 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1101 01:00:57.370558 1266961 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1101 01:00:57.370569 1266961 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1101 01:00:57.370577 1266961 command_runner.go:130] > #   should be moved to the container's cgroup
	I1101 01:00:57.370582 1266961 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1101 01:00:57.370588 1266961 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1101 01:00:57.370593 1266961 command_runner.go:130] > runtime_type = "oci"
	I1101 01:00:57.370598 1266961 command_runner.go:130] > runtime_root = "/run/runc"
	I1101 01:00:57.370606 1266961 command_runner.go:130] > runtime_config_path = ""
	I1101 01:00:57.370610 1266961 command_runner.go:130] > monitor_path = ""
	I1101 01:00:57.370615 1266961 command_runner.go:130] > monitor_cgroup = ""
	I1101 01:00:57.370620 1266961 command_runner.go:130] > monitor_exec_cgroup = ""
	I1101 01:00:57.370659 1266961 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1101 01:00:57.370667 1266961 command_runner.go:130] > # running containers
	I1101 01:00:57.370672 1266961 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1101 01:00:57.370680 1266961 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1101 01:00:57.370691 1266961 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1101 01:00:57.370698 1266961 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1101 01:00:57.370704 1266961 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1101 01:00:57.370713 1266961 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1101 01:00:57.370719 1266961 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1101 01:00:57.370726 1266961 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1101 01:00:57.370736 1266961 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1101 01:00:57.370741 1266961 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
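Applying the handler-table format documented above, a sketch that enables crun as a second handler might look like this (the paths are assumptions; they must match where crun actually lives on the host):

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"

A pod then selects it via a Kubernetes RuntimeClass whose handler field is "crun".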
	I1101 01:00:57.370749 1266961 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1101 01:00:57.370756 1266961 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1101 01:00:57.370763 1266961 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1101 01:00:57.370775 1266961 command_runner.go:130] > # Each workload has a name, an activation_annotation, an annotation_prefix, and a set of resources it supports mutating.
	I1101 01:00:57.370786 1266961 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1101 01:00:57.370795 1266961 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1101 01:00:57.370816 1266961 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1101 01:00:57.370828 1266961 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1101 01:00:57.370835 1266961 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1101 01:00:57.370843 1266961 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1101 01:00:57.370848 1266961 command_runner.go:130] > # Example:
	I1101 01:00:57.370856 1266961 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1101 01:00:57.370862 1266961 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1101 01:00:57.370870 1266961 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1101 01:00:57.370877 1266961 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1101 01:00:57.370883 1266961 command_runner.go:130] > # cpuset = "0-1"
	I1101 01:00:57.370891 1266961 command_runner.go:130] > # cpushares = 0
	I1101 01:00:57.370895 1266961 command_runner.go:130] > # Where:
	I1101 01:00:57.370902 1266961 command_runner.go:130] > # The workload name is workload-type.
	I1101 01:00:57.370916 1266961 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1101 01:00:57.370923 1266961 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1101 01:00:57.370930 1266961 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1101 01:00:57.370942 1266961 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1101 01:00:57.370950 1266961 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1101 01:00:57.370956 1266961 command_runner.go:130] > # 
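An activated (uncommented) instance of the table sketched above might read as follows; the workload name "throttled" and both resource values are illustrative assumptions:

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpuset = "0-1"
	cpushares = 512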
	I1101 01:00:57.370965 1266961 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1101 01:00:57.370972 1266961 command_runner.go:130] > #
	I1101 01:00:57.370980 1266961 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1101 01:00:57.370987 1266961 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1101 01:00:57.370995 1266961 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1101 01:00:57.371002 1266961 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1101 01:00:57.371009 1266961 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1101 01:00:57.371016 1266961 command_runner.go:130] > [crio.image]
	I1101 01:00:57.371025 1266961 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1101 01:00:57.371031 1266961 command_runner.go:130] > # default_transport = "docker://"
	I1101 01:00:57.371041 1266961 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1101 01:00:57.371049 1266961 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1101 01:00:57.371058 1266961 command_runner.go:130] > # global_auth_file = ""
	I1101 01:00:57.371064 1266961 command_runner.go:130] > # The image used to instantiate infra containers.
	I1101 01:00:57.371070 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:00:57.371075 1266961 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1101 01:00:57.371084 1266961 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1101 01:00:57.371093 1266961 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1101 01:00:57.371101 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:00:57.371106 1266961 command_runner.go:130] > # pause_image_auth_file = ""
	I1101 01:00:57.371113 1266961 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1101 01:00:57.371123 1266961 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1101 01:00:57.371130 1266961 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1101 01:00:57.371140 1266961 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1101 01:00:57.371145 1266961 command_runner.go:130] > # pause_command = "/pause"
	I1101 01:00:57.371153 1266961 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1101 01:00:57.371162 1266961 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1101 01:00:57.371169 1266961 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1101 01:00:57.371179 1266961 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1101 01:00:57.371186 1266961 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1101 01:00:57.371197 1266961 command_runner.go:130] > # signature_policy = ""
	I1101 01:00:57.371207 1266961 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1101 01:00:57.371218 1266961 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1101 01:00:57.371223 1266961 command_runner.go:130] > # changing them here.
	I1101 01:00:57.371230 1266961 command_runner.go:130] > # insecure_registries = [
	I1101 01:00:57.371234 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.371242 1266961 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1101 01:00:57.371253 1266961 command_runner.go:130] > # ignore; the last of these will ignore volumes entirely.
	I1101 01:00:57.371260 1266961 command_runner.go:130] > # image_volumes = "mkdir"
	I1101 01:00:57.371267 1266961 command_runner.go:130] > # Temporary directory to use for storing big files
	I1101 01:00:57.371274 1266961 command_runner.go:130] > # big_files_temporary_dir = ""
	I1101 01:00:57.371282 1266961 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1101 01:00:57.371287 1266961 command_runner.go:130] > # CNI plugins.
	I1101 01:00:57.371294 1266961 command_runner.go:130] > [crio.network]
	I1101 01:00:57.371303 1266961 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1101 01:00:57.371310 1266961 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1101 01:00:57.371315 1266961 command_runner.go:130] > # cni_default_network = ""
	I1101 01:00:57.371322 1266961 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1101 01:00:57.371331 1266961 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1101 01:00:57.371338 1266961 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1101 01:00:57.371343 1266961 command_runner.go:130] > # plugin_dirs = [
	I1101 01:00:57.371350 1266961 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1101 01:00:57.371354 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.371364 1266961 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1101 01:00:57.371369 1266961 command_runner.go:130] > [crio.metrics]
	I1101 01:00:57.371375 1266961 command_runner.go:130] > # Globally enable or disable metrics support.
	I1101 01:00:57.371383 1266961 command_runner.go:130] > # enable_metrics = false
	I1101 01:00:57.371388 1266961 command_runner.go:130] > # Specify enabled metrics collectors.
	I1101 01:00:57.371394 1266961 command_runner.go:130] > # By default, all metrics are enabled.
	I1101 01:00:57.371402 1266961 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1101 01:00:57.371409 1266961 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1101 01:00:57.371418 1266961 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1101 01:00:57.371427 1266961 command_runner.go:130] > # metrics_collectors = [
	I1101 01:00:57.371439 1266961 command_runner.go:130] > # 	"operations",
	I1101 01:00:57.371445 1266961 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1101 01:00:57.371451 1266961 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1101 01:00:57.371458 1266961 command_runner.go:130] > # 	"operations_errors",
	I1101 01:00:57.371463 1266961 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1101 01:00:57.371468 1266961 command_runner.go:130] > # 	"image_pulls_by_name",
	I1101 01:00:57.371474 1266961 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1101 01:00:57.371479 1266961 command_runner.go:130] > # 	"image_pulls_failures",
	I1101 01:00:57.371484 1266961 command_runner.go:130] > # 	"image_pulls_successes",
	I1101 01:00:57.371490 1266961 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1101 01:00:57.371497 1266961 command_runner.go:130] > # 	"image_layer_reuse",
	I1101 01:00:57.371502 1266961 command_runner.go:130] > # 	"containers_oom_total",
	I1101 01:00:57.371511 1266961 command_runner.go:130] > # 	"containers_oom",
	I1101 01:00:57.371516 1266961 command_runner.go:130] > # 	"processes_defunct",
	I1101 01:00:57.371520 1266961 command_runner.go:130] > # 	"operations_total",
	I1101 01:00:57.371532 1266961 command_runner.go:130] > # 	"operations_latency_seconds",
	I1101 01:00:57.371539 1266961 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1101 01:00:57.371549 1266961 command_runner.go:130] > # 	"operations_errors_total",
	I1101 01:00:57.371554 1266961 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1101 01:00:57.371560 1266961 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1101 01:00:57.371565 1266961 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1101 01:00:57.371570 1266961 command_runner.go:130] > # 	"image_pulls_success_total",
	I1101 01:00:57.371575 1266961 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1101 01:00:57.371584 1266961 command_runner.go:130] > # 	"containers_oom_count_total",
	I1101 01:00:57.371588 1266961 command_runner.go:130] > # ]
	I1101 01:00:57.371594 1266961 command_runner.go:130] > # The port on which the metrics server will listen.
	I1101 01:00:57.371600 1266961 command_runner.go:130] > # metrics_port = 9090
	I1101 01:00:57.371609 1266961 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1101 01:00:57.371617 1266961 command_runner.go:130] > # metrics_socket = ""
	I1101 01:00:57.371623 1266961 command_runner.go:130] > # The certificate for the secure metrics server.
	I1101 01:00:57.371633 1266961 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1101 01:00:57.371640 1266961 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1101 01:00:57.371646 1266961 command_runner.go:130] > # certificate on any modification event.
	I1101 01:00:57.371651 1266961 command_runner.go:130] > # metrics_cert = ""
	I1101 01:00:57.371659 1266961 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1101 01:00:57.371667 1266961 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1101 01:00:57.371675 1266961 command_runner.go:130] > # metrics_key = ""
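If one wanted the metrics endpoint live, a minimal sketch built only from the knobs listed above would be (the three collectors are an arbitrary subset of the documented list; port 9090 is the documented default):

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]

Omitting metrics_collectors entirely keeps the default of every collector enabled.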
	I1101 01:00:57.371681 1266961 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1101 01:00:57.371686 1266961 command_runner.go:130] > [crio.tracing]
	I1101 01:00:57.371698 1266961 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1101 01:00:57.371703 1266961 command_runner.go:130] > # enable_tracing = false
	I1101 01:00:57.371713 1266961 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1101 01:00:57.371719 1266961 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1101 01:00:57.371725 1266961 command_runner.go:130] > # Number of samples to collect per million spans.
	I1101 01:00:57.371734 1266961 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1101 01:00:57.371745 1266961 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1101 01:00:57.371750 1266961 command_runner.go:130] > [crio.stats]
	I1101 01:00:57.371757 1266961 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1101 01:00:57.371767 1266961 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1101 01:00:57.371772 1266961 command_runner.go:130] > # stats_collection_period = 0
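Stripped of its commentary, the uncommented values in the dump above boil down to cgroup_manager, the [crio.runtime.runtimes.runc] table, and pause_image. A minimal sketch of the same effective configuration, expressed as a drop-in (CRI-O merges files from /etc/crio/crio.conf.d/ over the main config; the empty monitor_* keys are omitted here), would be:

	[crio.runtime]
	cgroup_manager = "cgroupfs"

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"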
	I1101 01:00:57.371865 1266961 cni.go:84] Creating CNI manager for ""
	I1101 01:00:57.371877 1266961 cni.go:136] 1 nodes found, recommending kindnet
	I1101 01:00:57.371907 1266961 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:00:57.371932 1266961 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-291182 NodeName:multinode-291182 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:00:57.372064 1266961 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-291182"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:00:57.372132 1266961 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-291182 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-291182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:00:57.372199 1266961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:00:57.381964 1266961 command_runner.go:130] > kubeadm
	I1101 01:00:57.381982 1266961 command_runner.go:130] > kubectl
	I1101 01:00:57.381987 1266961 command_runner.go:130] > kubelet
	I1101 01:00:57.383044 1266961 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:00:57.383120 1266961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 01:00:57.393488 1266961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1101 01:00:57.414059 1266961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:00:57.434880 1266961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1101 01:00:57.455129 1266961 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1101 01:00:57.459611 1266961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 01:00:57.472184 1266961 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182 for IP: 192.168.58.2
	I1101 01:00:57.472213 1266961 certs.go:190] acquiring lock for shared ca certs: {Name:mk19a54d78f5cf4996fdfc5da5ee5226ef1f844f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:57.472356 1266961 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key
	I1101 01:00:57.472403 1266961 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key
	I1101 01:00:57.472447 1266961 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.key
	I1101 01:00:57.472457 1266961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.crt with IP's: []
	I1101 01:00:58.204331 1266961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.crt ...
	I1101 01:00:58.204364 1266961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.crt: {Name:mkf0dc7812f8142a1d47d51073e58b891b158072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:58.204586 1266961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.key ...
	I1101 01:00:58.204600 1266961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.key: {Name:mkc12af60b4df7699b126082d5377c71baa3dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:58.204692 1266961 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.key.cee25041
	I1101 01:00:58.204705 1266961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1101 01:00:58.691456 1266961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.crt.cee25041 ...
	I1101 01:00:58.691490 1266961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.crt.cee25041: {Name:mkef4c4bc5a83cdeb2cd55df2000a653ed08a194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:58.691674 1266961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.key.cee25041 ...
	I1101 01:00:58.691686 1266961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.key.cee25041: {Name:mk4eb6be777b26b8209deea51bab6689b4366987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:58.691778 1266961 certs.go:337] copying /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.crt
	I1101 01:00:58.691853 1266961 certs.go:341] copying /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.key
	I1101 01:00:58.691920 1266961 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.key
	I1101 01:00:58.691938 1266961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.crt with IP's: []
	I1101 01:00:59.077639 1266961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.crt ...
	I1101 01:00:59.077668 1266961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.crt: {Name:mk4668297758858151ae0e8d964621287f253ac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:59.077845 1266961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.key ...
	I1101 01:00:59.077858 1266961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.key: {Name:mkb3962e130cc66f6727cfa02b8cc8138f72cf14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:00:59.077939 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1101 01:00:59.077960 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1101 01:00:59.077972 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1101 01:00:59.077987 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1101 01:00:59.077999 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 01:00:59.078015 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 01:00:59.078034 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 01:00:59.078054 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 01:00:59.078103 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem (1338 bytes)
	W1101 01:00:59.078145 1266961 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897_empty.pem, impossibly tiny 0 bytes
	I1101 01:00:59.078158 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:00:59.078193 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem (1082 bytes)
	I1101 01:00:59.078221 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:00:59.078249 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem (1675 bytes)
	I1101 01:00:59.078303 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 01:00:59.078335 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> /usr/share/ca-certificates/12028972.pem
	I1101 01:00:59.078351 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:59.078361 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem -> /usr/share/ca-certificates/1202897.pem
	I1101 01:00:59.078947 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1101 01:00:59.106918 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 01:00:59.135101 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 01:00:59.163063 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 01:00:59.191281 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:00:59.218528 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:00:59.245744 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:00:59.273051 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:00:59.302113 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /usr/share/ca-certificates/12028972.pem (1708 bytes)
	I1101 01:00:59.329852 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:00:59.357130 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem --> /usr/share/ca-certificates/1202897.pem (1338 bytes)
	I1101 01:00:59.384007 1266961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I1101 01:00:59.404409 1266961 ssh_runner.go:195] Run: openssl version
	I1101 01:00:59.411123 1266961 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1101 01:00:59.411542 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12028972.pem && ln -fs /usr/share/ca-certificates/12028972.pem /etc/ssl/certs/12028972.pem"
	I1101 01:00:59.423029 1266961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12028972.pem
	I1101 01:00:59.427499 1266961 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  1 00:39 /usr/share/ca-certificates/12028972.pem
	I1101 01:00:59.427792 1266961 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  1 00:39 /usr/share/ca-certificates/12028972.pem
	I1101 01:00:59.427871 1266961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12028972.pem
	I1101 01:00:59.435747 1266961 command_runner.go:130] > 3ec20f2e
	I1101 01:00:59.436153 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12028972.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:00:59.447452 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:00:59.458586 1266961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:59.463136 1266961 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  1 00:33 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:59.463161 1266961 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  1 00:33 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:59.463216 1266961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:00:59.470977 1266961 command_runner.go:130] > b5213941
	I1101 01:00:59.471410 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:00:59.482452 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1202897.pem && ln -fs /usr/share/ca-certificates/1202897.pem /etc/ssl/certs/1202897.pem"
	I1101 01:00:59.493516 1266961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1202897.pem
	I1101 01:00:59.497700 1266961 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  1 00:39 /usr/share/ca-certificates/1202897.pem
	I1101 01:00:59.497992 1266961 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  1 00:39 /usr/share/ca-certificates/1202897.pem
	I1101 01:00:59.498074 1266961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1202897.pem
	I1101 01:00:59.506181 1266961 command_runner.go:130] > 51391683
	I1101 01:00:59.506691 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1202897.pem /etc/ssl/certs/51391683.0"
	I1101 01:00:59.517962 1266961 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:00:59.522279 1266961 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 01:00:59.522343 1266961 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 01:00:59.522416 1266961 kubeadm.go:404] StartCluster: {Name:multinode-291182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-291182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:00:59.522490 1266961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 01:00:59.522567 1266961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 01:00:59.565867 1266961 cri.go:89] found id: ""
	I1101 01:00:59.565983 1266961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 01:00:59.576321 1266961 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1101 01:00:59.576343 1266961 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1101 01:00:59.576351 1266961 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1101 01:00:59.576429 1266961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 01:00:59.587006 1266961 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1101 01:00:59.587122 1266961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 01:00:59.597615 1266961 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1101 01:00:59.597642 1266961 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1101 01:00:59.597652 1266961 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1101 01:00:59.597663 1266961 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 01:00:59.597692 1266961 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
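
The "skipping stale config cleanup" above falls out of a simple existence probe: none of the kubeconfigs a previous kubeadm run would have written exist, so there is nothing to clean. A hedged local sketch of the same check follows; the real probe runs `sudo ls -la` over ssh, and the helper name here is ours, not minikube's.

    package main

    import (
    	"fmt"
    	"os"
    )

    // kubeconfigsPresent mirrors the stale-config probe: if none of the
    // files kubeadm writes exist yet, the cleanup step is skipped,
    // exactly as in the log above.
    func kubeconfigsPresent() bool {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if _, err := os.Stat(f); err == nil {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println("stale kubeconfigs present:", kubeconfigsPresent())
    }
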
	I1101 01:00:59.597724 1266961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 01:00:59.653701 1266961 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1101 01:00:59.653729 1266961 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1101 01:00:59.654140 1266961 kubeadm.go:322] [preflight] Running pre-flight checks
	I1101 01:00:59.654182 1266961 command_runner.go:130] > [preflight] Running pre-flight checks
	I1101 01:00:59.704254 1266961 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1101 01:00:59.704280 1266961 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1101 01:00:59.704332 1266961 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1101 01:00:59.704342 1266961 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1049-aws
	I1101 01:00:59.704377 1266961 kubeadm.go:322] OS: Linux
	I1101 01:00:59.704386 1266961 command_runner.go:130] > OS: Linux
	I1101 01:00:59.704434 1266961 kubeadm.go:322] CGROUPS_CPU: enabled
	I1101 01:00:59.704445 1266961 command_runner.go:130] > CGROUPS_CPU: enabled
	I1101 01:00:59.704491 1266961 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1101 01:00:59.704499 1266961 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1101 01:00:59.704543 1266961 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1101 01:00:59.704554 1266961 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1101 01:00:59.704602 1266961 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1101 01:00:59.704616 1266961 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1101 01:00:59.704662 1266961 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1101 01:00:59.704672 1266961 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1101 01:00:59.704716 1266961 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1101 01:00:59.704724 1266961 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1101 01:00:59.704765 1266961 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1101 01:00:59.704773 1266961 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1101 01:00:59.704817 1266961 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1101 01:00:59.704824 1266961 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1101 01:00:59.704874 1266961 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1101 01:00:59.704886 1266961 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1101 01:00:59.791143 1266961 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:00:59.791169 1266961 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 01:00:59.791258 1266961 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:00:59.791282 1266961 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 01:00:59.791369 1266961 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:00:59.791376 1266961 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1101 01:01:00.057772 1266961 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:01:00.062930 1266961 out.go:204]   - Generating certificates and keys ...
	I1101 01:01:00.058014 1266961 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 01:01:00.063158 1266961 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1101 01:01:00.063205 1266961 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1101 01:01:00.063322 1266961 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1101 01:01:00.063354 1266961 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1101 01:01:00.370534 1266961 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 01:01:00.370557 1266961 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 01:01:00.628951 1266961 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1101 01:01:00.628974 1266961 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1101 01:01:00.937832 1266961 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1101 01:01:00.937856 1266961 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1101 01:01:01.476834 1266961 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1101 01:01:01.476865 1266961 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1101 01:01:02.306352 1266961 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1101 01:01:02.306377 1266961 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1101 01:01:02.306697 1266961 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-291182] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1101 01:01:02.306713 1266961 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-291182] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1101 01:01:02.629043 1266961 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1101 01:01:02.629068 1266961 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1101 01:01:02.629540 1266961 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-291182] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1101 01:01:02.629577 1266961 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-291182] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1101 01:01:03.048685 1266961 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 01:01:03.048709 1266961 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 01:01:03.632544 1266961 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 01:01:03.632567 1266961 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 01:01:04.022606 1266961 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1101 01:01:04.022629 1266961 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1101 01:01:04.022968 1266961 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:01:04.022980 1266961 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 01:01:04.568593 1266961 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:01:04.568626 1266961 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 01:01:05.110567 1266961 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:01:05.110591 1266961 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 01:01:05.455415 1266961 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:01:05.455440 1266961 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 01:01:06.061041 1266961 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:01:06.061066 1266961 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 01:01:06.063058 1266961 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:01:06.063084 1266961 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 01:01:06.066598 1266961 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:01:06.069259 1266961 out.go:204]   - Booting up control plane ...
	I1101 01:01:06.066735 1266961 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 01:01:06.069372 1266961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:01:06.069386 1266961 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 01:01:06.069507 1266961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:01:06.069514 1266961 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 01:01:06.070167 1266961 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:01:06.070196 1266961 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 01:01:06.081799 1266961 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:01:06.081821 1266961 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:01:06.082595 1266961 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:01:06.082634 1266961 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:01:06.082947 1266961 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1101 01:01:06.082965 1266961 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 01:01:06.181299 1266961 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:01:06.181324 1266961 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1101 01:01:13.184155 1266961 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002724 seconds
	I1101 01:01:13.184189 1266961 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.002724 seconds
	I1101 01:01:13.184295 1266961 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:01:13.184300 1266961 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 01:01:13.229302 1266961 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:01:13.229332 1266961 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 01:01:13.773591 1266961 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:01:13.773630 1266961 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1101 01:01:13.773803 1266961 kubeadm.go:322] [mark-control-plane] Marking the node multinode-291182 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:01:13.773812 1266961 command_runner.go:130] > [mark-control-plane] Marking the node multinode-291182 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 01:01:14.284948 1266961 kubeadm.go:322] [bootstrap-token] Using token: 709pui.kesguyvi7w5y1wwh
	I1101 01:01:14.287020 1266961 out.go:204]   - Configuring RBAC rules ...
	I1101 01:01:14.285068 1266961 command_runner.go:130] > [bootstrap-token] Using token: 709pui.kesguyvi7w5y1wwh
	I1101 01:01:14.287135 1266961 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:01:14.287150 1266961 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 01:01:14.291806 1266961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:01:14.291823 1266961 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 01:01:14.298899 1266961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:01:14.298917 1266961 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 01:01:14.302577 1266961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:01:14.302599 1266961 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 01:01:14.307629 1266961 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:01:14.307653 1266961 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 01:01:14.311242 1266961 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:01:14.311262 1266961 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 01:01:14.324251 1266961 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:01:14.324283 1266961 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 01:01:14.575407 1266961 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1101 01:01:14.575427 1266961 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1101 01:01:14.730934 1266961 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1101 01:01:14.730961 1266961 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1101 01:01:14.732513 1266961 kubeadm.go:322] 
	I1101 01:01:14.732584 1266961 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1101 01:01:14.732596 1266961 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1101 01:01:14.732608 1266961 kubeadm.go:322] 
	I1101 01:01:14.732699 1266961 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1101 01:01:14.732705 1266961 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1101 01:01:14.732709 1266961 kubeadm.go:322] 
	I1101 01:01:14.732739 1266961 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1101 01:01:14.732744 1266961 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1101 01:01:14.732799 1266961 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:01:14.732803 1266961 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 01:01:14.732857 1266961 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:01:14.732861 1266961 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 01:01:14.732865 1266961 kubeadm.go:322] 
	I1101 01:01:14.732915 1266961 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1101 01:01:14.732920 1266961 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1101 01:01:14.732924 1266961 kubeadm.go:322] 
	I1101 01:01:14.732972 1266961 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:01:14.732976 1266961 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 01:01:14.733023 1266961 kubeadm.go:322] 
	I1101 01:01:14.733073 1266961 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1101 01:01:14.733078 1266961 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1101 01:01:14.733161 1266961 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:01:14.733167 1266961 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 01:01:14.733230 1266961 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:01:14.733234 1266961 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 01:01:14.733238 1266961 kubeadm.go:322] 
	I1101 01:01:14.733330 1266961 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:01:14.733335 1266961 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1101 01:01:14.733406 1266961 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1101 01:01:14.733415 1266961 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1101 01:01:14.733419 1266961 kubeadm.go:322] 
	I1101 01:01:14.733502 1266961 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 709pui.kesguyvi7w5y1wwh \
	I1101 01:01:14.733510 1266961 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 709pui.kesguyvi7w5y1wwh \
	I1101 01:01:14.733605 1266961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 \
	I1101 01:01:14.733613 1266961 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 \
	I1101 01:01:14.733632 1266961 kubeadm.go:322] 	--control-plane 
	I1101 01:01:14.733637 1266961 command_runner.go:130] > 	--control-plane 
	I1101 01:01:14.733641 1266961 kubeadm.go:322] 
	I1101 01:01:14.733723 1266961 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:01:14.733729 1266961 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1101 01:01:14.733735 1266961 kubeadm.go:322] 
	I1101 01:01:14.733818 1266961 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 709pui.kesguyvi7w5y1wwh \
	I1101 01:01:14.733823 1266961 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 709pui.kesguyvi7w5y1wwh \
	I1101 01:01:14.733920 1266961 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 
	I1101 01:01:14.733925 1266961 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 
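
The --discovery-token-ca-cert-hash repeated in both join commands is a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A minimal standard-library sketch for recomputing it, assuming kubeadm's default CA path:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// /etc/kubernetes/pki/ca.crt is kubeadm's usual CA location (assumed).
    	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(spki)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
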
	I1101 01:01:14.738328 1266961 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1101 01:01:14.738350 1266961 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1101 01:01:14.738451 1266961 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:01:14.738460 1266961 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:01:14.738477 1266961 cni.go:84] Creating CNI manager for ""
	I1101 01:01:14.738484 1266961 cni.go:136] 1 nodes found, recommending kindnet
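
The two cni.go lines encode the selection rule: no explicit CNI was requested, the runtime is crio (which ships no built-in pod networking), and this is a multinode profile, so kindnet is recommended. A simplified stand-in for that branch; the real logic in minikube's cni package weighs more inputs than this.

    package main

    import "fmt"

    // chooseCNI is a simplified stand-in for the selection in cni.go: an
    // explicit --cni always wins; otherwise crio and multinode profiles
    // get kindnet. The real code also considers drivers and other runtimes.
    func chooseCNI(explicit, runtime string, nodes int) string {
    	if explicit != "" {
    		return explicit
    	}
    	if runtime == "crio" || nodes > 1 {
    		return "kindnet"
    	}
    	return "bridge"
    }

    func main() {
    	fmt.Println(chooseCNI("", "crio", 1)) // kindnet, as in the log
    }
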
	I1101 01:01:14.741147 1266961 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1101 01:01:14.743094 1266961 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 01:01:14.764289 1266961 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 01:01:14.764310 1266961 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1101 01:01:14.764318 1266961 command_runner.go:130] > Device: 3ah/58d	Inode: 1827008     Links: 1
	I1101 01:01:14.764340 1266961 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 01:01:14.764352 1266961 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1101 01:01:14.764358 1266961 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1101 01:01:14.764370 1266961 command_runner.go:130] > Change: 2023-11-01 00:32:33.764020799 +0000
	I1101 01:01:14.764377 1266961 command_runner.go:130] >  Birth: 2023-11-01 00:32:33.720021119 +0000
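
The stat above only confirms that the portmap CNI plugin shipped in the kicbase image is present and executable before the manifest is applied. A local one-call equivalent, for illustration:

    package main

    import (
    	"log"
    	"os"
    )

    func main() {
    	// Same probe as `stat /opt/cni/bin/portmap`, run locally: the
    	// plugin must exist and be executable before the CNI manifest
    	// is applied.
    	fi, err := os.Stat("/opt/cni/bin/portmap")
    	if err != nil {
    		log.Fatalf("portmap plugin missing: %v", err)
    	}
    	if fi.Mode()&0o111 == 0 {
    		log.Fatalf("portmap plugin is not executable: %v", fi.Mode())
    	}
    	log.Printf("portmap present, %d bytes", fi.Size())
    }
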
	I1101 01:01:14.765445 1266961 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 01:01:14.765464 1266961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 01:01:14.806031 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 01:01:15.649694 1266961 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1101 01:01:15.658315 1266961 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1101 01:01:15.668155 1266961 command_runner.go:130] > serviceaccount/kindnet created
	I1101 01:01:15.694609 1266961 command_runner.go:130] > daemonset.apps/kindnet created
	I1101 01:01:15.700554 1266961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 01:01:15.700701 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:15.700772 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0-beta.0 minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9 minikube.k8s.io/name=multinode-291182 minikube.k8s.io/updated_at=2023_11_01T01_01_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:15.876583 1266961 command_runner.go:130] > node/multinode-291182 labeled
	I1101 01:01:15.880013 1266961 command_runner.go:130] > -16
	I1101 01:01:15.880037 1266961 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1101 01:01:15.880058 1266961 ops.go:34] apiserver oom_adj: -16
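
ops.go parses the -16 echoed by the cat: kube-apiserver's legacy oom_adj is strongly negative, so the kernel OOM killer avoids it under memory pressure. A sketch of the same read for an arbitrary PID; the log resolves the PID with pgrep, which this stand-in skips.

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    // oomAdj reads /proc/<pid>/oom_adj, the legacy knob the test checks.
    // (Modern kernels prefer oom_score_adj, but the file read here is
    // the one the log uses.)
    func oomAdj(pid int) (int, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(b)))
    }

    func main() {
    	pid := os.Getpid() // stand-in; the log uses $(pgrep kube-apiserver)
    	v, err := oomAdj(pid)
    	fmt.Println(v, err)
    }
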
	I1101 01:01:15.880121 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:15.976481 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:15.976578 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:16.070213 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:16.571079 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:16.672466 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:17.071069 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:17.159899 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:17.571373 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:17.662767 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:18.070989 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:18.171738 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:18.571385 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:18.660575 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:19.070937 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:19.161178 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:19.570784 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:19.660410 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:20.070651 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:20.162430 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:20.570443 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:20.669309 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:21.070588 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:21.157834 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:21.570429 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:21.664769 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:22.070374 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:22.160379 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:22.571023 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:22.658032 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:23.071225 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:23.154263 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:23.570456 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:23.660388 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:24.071345 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:24.162667 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:24.571266 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:24.662733 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:25.070988 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:25.173540 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:25.571119 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:25.662095 1266961 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1101 01:01:26.070415 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 01:01:26.179609 1266961 command_runner.go:130] > NAME      SECRETS   AGE
	I1101 01:01:26.179630 1266961 command_runner.go:130] > default   0         0s
	I1101 01:01:26.183788 1266961 kubeadm.go:1081] duration metric: took 10.483126366s to wait for elevateKubeSystemPrivileges.
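
The burst of NotFound errors above is expected: immediately after kubeadm init, minikube polls for the "default" ServiceAccount until the controller-manager's service-account controller creates it, which took about 10.5s here. A client-go sketch of that wait; clientset construction is omitted and the function name is ours.

    package sketch

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls until the service-account controller has
    // created the "default" ServiceAccount, which is what the NotFound
    // burst above is waiting out.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return false, nil // not there yet; keep polling
    			}
    			return err == nil, err
    		})
    }
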
	I1101 01:01:26.183827 1266961 kubeadm.go:406] StartCluster complete in 26.661426853s
	I1101 01:01:26.183845 1266961 settings.go:142] acquiring lock: {Name:mke36bce3f316e572c27d9ade5690ad307116f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:26.183946 1266961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:01:26.184732 1266961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17486-1197516/kubeconfig: {Name:mk54047efde1577abb33547e94416477b8fd3071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:01:26.185005 1266961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 01:01:26.185275 1266961 config.go:182] Loaded profile config "multinode-291182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:01:26.185411 1266961 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:01:26.185452 1266961 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1101 01:01:26.185528 1266961 addons.go:69] Setting storage-provisioner=true in profile "multinode-291182"
	I1101 01:01:26.185542 1266961 addons.go:231] Setting addon storage-provisioner=true in "multinode-291182"
	I1101 01:01:26.185596 1266961 host.go:66] Checking if "multinode-291182" exists ...
	I1101 01:01:26.185741 1266961 kapi.go:59] client config for multinode-291182: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 01:01:26.186071 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Status}}
	I1101 01:01:26.187149 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 01:01:26.187168 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:26.187177 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:26.187187 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:26.187442 1266961 cert_rotation.go:137] Starting client certificate rotation controller
	I1101 01:01:26.187891 1266961 addons.go:69] Setting default-storageclass=true in profile "multinode-291182"
	I1101 01:01:26.187919 1266961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-291182"
	I1101 01:01:26.188289 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Status}}
	I1101 01:01:26.228852 1266961 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I1101 01:01:26.228874 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:26.228905 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:26.228913 1266961 round_trippers.go:580]     Content-Length: 291
	I1101 01:01:26.228927 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:26 GMT
	I1101 01:01:26.228934 1266961 round_trippers.go:580]     Audit-Id: 53c708f6-6f71-41bd-9695-41a70d09761e
	I1101 01:01:26.228940 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:26.228946 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:26.228952 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:26.228991 1266961 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e4f95f0-392d-400c-bda6-a37388e1041b","resourceVersion":"269","creationTimestamp":"2023-11-01T01:01:14Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1101 01:01:26.229377 1266961 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e4f95f0-392d-400c-bda6-a37388e1041b","resourceVersion":"269","creationTimestamp":"2023-11-01T01:01:14Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1101 01:01:26.229423 1266961 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 01:01:26.229430 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:26.229437 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:26.229448 1266961 round_trippers.go:473]     Content-Type: application/json
	I1101 01:01:26.229457 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:26.244891 1266961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 01:01:26.244679 1266961 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:01:26.247123 1266961 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:01:26.247139 1266961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 01:01:26.247207 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:01:26.247372 1266961 kapi.go:59] client config for multinode-291182: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 01:01:26.247654 1266961 addons.go:231] Setting addon default-storageclass=true in "multinode-291182"
	I1101 01:01:26.247693 1266961 host.go:66] Checking if "multinode-291182" exists ...
	I1101 01:01:26.248185 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Status}}
	I1101 01:01:26.270768 1266961 round_trippers.go:574] Response Status: 200 OK in 41 milliseconds
	I1101 01:01:26.270795 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:26.270805 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:26 GMT
	I1101 01:01:26.270812 1266961 round_trippers.go:580]     Audit-Id: 06097396-0a6c-466e-8420-232cf9f0b4fd
	I1101 01:01:26.270818 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:26.270825 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:26.270835 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:26.270842 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:26.270853 1266961 round_trippers.go:580]     Content-Length: 291
	I1101 01:01:26.270883 1266961 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e4f95f0-392d-400c-bda6-a37388e1041b","resourceVersion":"338","creationTimestamp":"2023-11-01T01:01:14Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1101 01:01:26.271049 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 01:01:26.271065 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:26.271073 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:26.271087 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:26.286897 1266961 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 01:01:26.286923 1266961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 01:01:26.287010 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:01:26.317641 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
	I1101 01:01:26.340528 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
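
Both sshutil lines dial the docker-forwarded SSH port (127.0.0.1:34367) with the per-profile key. A minimal golang.org/x/crypto/ssh equivalent, using the key path and port from the log; as with minikube's local throwaway VMs, host-key pinning is skipped here.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and forwarded port are taken from the sshutil lines above.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User: "docker",
    		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Local, disposable VM: host-key verification is skipped.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:34367", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	fmt.Println("connected:", string(client.ServerVersion()))
    }
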
	I1101 01:01:26.347976 1266961 round_trippers.go:574] Response Status: 200 OK in 76 milliseconds
	I1101 01:01:26.347996 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:26.348005 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:26.348012 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:26.348019 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:26.348025 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:26.348037 1266961 round_trippers.go:580]     Content-Length: 291
	I1101 01:01:26.348049 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:26 GMT
	I1101 01:01:26.348055 1266961 round_trippers.go:580]     Audit-Id: e80621df-0961-4774-93de-227f618ae3ea
	I1101 01:01:26.352658 1266961 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e4f95f0-392d-400c-bda6-a37388e1041b","resourceVersion":"338","creationTimestamp":"2023-11-01T01:01:14Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1101 01:01:26.352783 1266961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-291182" context rescaled to 1 replicas
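
The GET/PUT pair above is a Scale-subresource round trip: kubeadm deploys CoreDNS with two replicas (spec.replicas:2 in the first response body), and minikube writes spec.replicas back to 1 so a single-kubelet profile runs one CoreDNS pod. The same operation via client-go; clientset construction is omitted and the function name is ours.

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS performs the same Scale-subresource round trip as
    // the GET/PUT pair in the log above.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = 1
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }
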
	I1101 01:01:26.352819 1266961 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 01:01:26.356123 1266961 out.go:177] * Verifying Kubernetes components...
	I1101 01:01:26.358132 1266961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:01:26.407281 1266961 command_runner.go:130] > apiVersion: v1
	I1101 01:01:26.407309 1266961 command_runner.go:130] > data:
	I1101 01:01:26.407315 1266961 command_runner.go:130] >   Corefile: |
	I1101 01:01:26.407320 1266961 command_runner.go:130] >     .:53 {
	I1101 01:01:26.407325 1266961 command_runner.go:130] >         errors
	I1101 01:01:26.407336 1266961 command_runner.go:130] >         health {
	I1101 01:01:26.407345 1266961 command_runner.go:130] >            lameduck 5s
	I1101 01:01:26.407353 1266961 command_runner.go:130] >         }
	I1101 01:01:26.407362 1266961 command_runner.go:130] >         ready
	I1101 01:01:26.407370 1266961 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1101 01:01:26.407381 1266961 command_runner.go:130] >            pods insecure
	I1101 01:01:26.407388 1266961 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1101 01:01:26.407400 1266961 command_runner.go:130] >            ttl 30
	I1101 01:01:26.407405 1266961 command_runner.go:130] >         }
	I1101 01:01:26.407414 1266961 command_runner.go:130] >         prometheus :9153
	I1101 01:01:26.407423 1266961 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1101 01:01:26.407431 1266961 command_runner.go:130] >            max_concurrent 1000
	I1101 01:01:26.407436 1266961 command_runner.go:130] >         }
	I1101 01:01:26.407448 1266961 command_runner.go:130] >         cache 30
	I1101 01:01:26.407453 1266961 command_runner.go:130] >         loop
	I1101 01:01:26.407458 1266961 command_runner.go:130] >         reload
	I1101 01:01:26.407466 1266961 command_runner.go:130] >         loadbalance
	I1101 01:01:26.407471 1266961 command_runner.go:130] >     }
	I1101 01:01:26.407483 1266961 command_runner.go:130] > kind: ConfigMap
	I1101 01:01:26.407488 1266961 command_runner.go:130] > metadata:
	I1101 01:01:26.407502 1266961 command_runner.go:130] >   creationTimestamp: "2023-11-01T01:01:14Z"
	I1101 01:01:26.407509 1266961 command_runner.go:130] >   name: coredns
	I1101 01:01:26.407515 1266961 command_runner.go:130] >   namespace: kube-system
	I1101 01:01:26.407522 1266961 command_runner.go:130] >   resourceVersion: "265"
	I1101 01:01:26.407532 1266961 command_runner.go:130] >   uid: 010fe84a-f2fa-40e1-b065-a5d03213bc10
	I1101 01:01:26.410840 1266961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
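
The pipeline above edits the live Corefile with sed before `kubectl replace`-ing the ConfigMap: it inserts a hosts block that resolves host.minikube.internal to the gateway IP ahead of the forward stanza, and a log directive ahead of errors. A plain-Go rendering of the hosts insertion, illustrative only; minikube really does it with sed over ssh, as shown, and the function name is ours.

    package sketch

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS hosts block mapping
    // host.minikube.internal to the gateway IP just before the forward
    // stanza (the log-directive insertion is omitted here).
    func injectHostRecord(corefile, hostIP string) string {
    	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	return strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
    }
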
	I1101 01:01:26.411300 1266961 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:01:26.411674 1266961 kapi.go:59] client config for multinode-291182: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 01:01:26.411966 1266961 node_ready.go:35] waiting up to 6m0s for node "multinode-291182" to be "Ready" ...
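
The repeated node GETs that follow are polling for one thing: the NodeReady condition flipping to True. The predicate, in client-go terms, is a minimal sketch:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // nodeReady is the condition the GETs below are polling for: the
    // loop re-fetches the Node object until NodeReady reports True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
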
	I1101 01:01:26.412060 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:26.412071 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:26.412081 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:26.412093 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:26.420925 1266961 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1101 01:01:26.421015 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:26.421025 1266961 round_trippers.go:580]     Audit-Id: e6371903-b563-4bc2-bbc6-9637a94d550f
	I1101 01:01:26.421035 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:26.421042 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:26.421050 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:26.421057 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:26.421074 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:26 GMT
	I1101 01:01:26.427361 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"336","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno [truncated 6126 chars]
	I1101 01:01:26.428148 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:26.428169 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:26.428180 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:26.428186 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:26.442059 1266961 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1101 01:01:26.442082 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:26.442091 1266961 round_trippers.go:580]     Audit-Id: 6434c232-2c72-4964-a5c3-aa95a9cafe31
	I1101 01:01:26.442098 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:26.442105 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:26.442111 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:26.442120 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:26.442132 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:26 GMT
	I1101 01:01:26.459497 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"336","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno [truncated 6126 chars]
	I1101 01:01:26.512869 1266961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 01:01:26.552267 1266961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 01:01:26.960512 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:26.960542 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:26.960553 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:26.960567 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:27.048671 1266961 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I1101 01:01:27.048742 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:27.048767 1266961 round_trippers.go:580]     Audit-Id: e168159f-fddd-45db-842c-38b9eed901db
	I1101 01:01:27.048792 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:27.048827 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:27.048851 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:27.048874 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:27.048911 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:27 GMT
	I1101 01:01:27.049524 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:27.139312 1266961 command_runner.go:130] > configmap/coredns replaced
	I1101 01:01:27.141038 1266961 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1101 01:01:27.321710 1266961 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1101 01:01:27.329473 1266961 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1101 01:01:27.348245 1266961 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1101 01:01:27.358612 1266961 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1101 01:01:27.366082 1266961 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1101 01:01:27.376798 1266961 command_runner.go:130] > pod/storage-provisioner created
	I1101 01:01:27.384793 1266961 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1101 01:01:27.384962 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1101 01:01:27.385022 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:27.385045 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:27.385073 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:27.392611 1266961 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1101 01:01:27.392679 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:27.392702 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:27.392726 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:27.392766 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:27.392794 1266961 round_trippers.go:580]     Content-Length: 1273
	I1101 01:01:27.392818 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:27 GMT
	I1101 01:01:27.392849 1266961 round_trippers.go:580]     Audit-Id: 7d1c9889-a048-4abe-8042-87a5c89e43a5
	I1101 01:01:27.392874 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:27.392973 1266961 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"402"},"items":[{"metadata":{"name":"standard","uid":"f70304e9-49a2-4ac8-a1f1-b6ea6a689f5c","resourceVersion":"358","creationTimestamp":"2023-11-01T01:01:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-01T01:01:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1101 01:01:27.393518 1266961 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f70304e9-49a2-4ac8-a1f1-b6ea6a689f5c","resourceVersion":"358","creationTimestamp":"2023-11-01T01:01:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-01T01:01:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1101 01:01:27.393615 1266961 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1101 01:01:27.393639 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:27.393678 1266961 round_trippers.go:473]     Content-Type: application/json
	I1101 01:01:27.393703 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:27.393724 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:27.398156 1266961 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 01:01:27.398215 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:27.398238 1266961 round_trippers.go:580]     Content-Length: 1220
	I1101 01:01:27.398253 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:27 GMT
	I1101 01:01:27.398261 1266961 round_trippers.go:580]     Audit-Id: 3cf30ed6-b0bc-47b9-861f-fe87e9104127
	I1101 01:01:27.398267 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:27.398274 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:27.398289 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:27.398298 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:27.398325 1266961 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f70304e9-49a2-4ac8-a1f1-b6ea6a689f5c","resourceVersion":"358","creationTimestamp":"2023-11-01T01:01:26Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-01T01:01:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1101 01:01:27.401400 1266961 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1101 01:01:27.403290 1266961 addons.go:502] enable addons completed in 1.217832685s: enabled=[storage-provisioner default-storageclass]
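[Editor's note: the GET/PUT pair above is the default-storageclass addon reconciling the "standard" StorageClass. The object can be reconstructed from the kubectl.kubernetes.io/last-applied-configuration annotation visible in the response body; in YAML form it is:

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	  labels:
	    addonmanager.kubernetes.io/mode: EnsureExists
	provisioner: k8s.io/minikube-hostpath

The is-default-class annotation is what makes unqualified PersistentVolumeClaims bind to this class.]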
	I1101 01:01:27.460253 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:27.460274 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:27.460284 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:27.460292 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:27.469515 1266961 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1101 01:01:27.469585 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:27.469607 1266961 round_trippers.go:580]     Audit-Id: 16d0ca66-da9b-4f07-8a4a-754c11d1ad5b
	I1101 01:01:27.469629 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:27.469664 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:27.469692 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:27.469715 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:27.469742 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:27 GMT
	I1101 01:01:27.470229 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:27.960231 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:27.960262 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:27.960272 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:27.960279 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:27.962899 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:27.962954 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:27.962962 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:27.962972 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:27 GMT
	I1101 01:01:27.962987 1266961 round_trippers.go:580]     Audit-Id: 67121e41-4e78-4fe6-8396-5b33e51d0cfa
	I1101 01:01:27.963004 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:27.963015 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:27.963021 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:27.963119 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:28.460629 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:28.460650 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:28.460666 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:28.460674 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:28.463046 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:28.463075 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:28.463084 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:28.463091 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:28 GMT
	I1101 01:01:28.463097 1266961 round_trippers.go:580]     Audit-Id: f960870e-68d1-431c-9c53-5e8200f44ca6
	I1101 01:01:28.463107 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:28.463113 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:28.463119 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:28.463439 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:28.463892 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:28.960582 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:28.960602 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:28.960613 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:28.960620 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:28.963063 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:28.963117 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:28.963140 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:28.963163 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:28.963198 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:28.963221 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:28.963236 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:28 GMT
	I1101 01:01:28.963243 1266961 round_trippers.go:580]     Audit-Id: ab421110-bec8-472e-8f1a-80229a6dfcbc
	I1101 01:01:28.963382 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:29.460233 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:29.460255 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:29.460268 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:29.460293 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:29.462727 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:29.462770 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:29.462778 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:29.462785 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:29.462791 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:29.462797 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:29.462804 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:29 GMT
	I1101 01:01:29.462819 1266961 round_trippers.go:580]     Audit-Id: c939c5f9-7126-4828-a293-7fc3dbea54f9
	I1101 01:01:29.462971 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:29.960411 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:29.960440 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:29.960451 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:29.960460 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:29.962920 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:29.962942 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:29.962952 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:29.962958 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:29.962964 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:29.962971 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:29.962977 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:29 GMT
	I1101 01:01:29.962988 1266961 round_trippers.go:580]     Audit-Id: a0a6c6a9-a5de-4554-af47-9e864a4307bf
	I1101 01:01:29.963108 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:30.460191 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:30.460214 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:30.460225 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:30.460232 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:30.462633 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:30.462657 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:30.462665 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:30.462672 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:30 GMT
	I1101 01:01:30.462679 1266961 round_trippers.go:580]     Audit-Id: aa358c85-3543-4aab-8cfb-cb60887fd8cd
	I1101 01:01:30.462685 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:30.462694 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:30.462701 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:30.462878 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:30.961031 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:30.961054 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:30.961064 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:30.961072 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:30.963482 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:30.963547 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:30.963570 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:30.963628 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:30.963654 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:30 GMT
	I1101 01:01:30.963670 1266961 round_trippers.go:580]     Audit-Id: e0a3ed4c-185a-4fe9-a204-8e84d8c52948
	I1101 01:01:30.963677 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:30.963684 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:30.963800 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:30.964219 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:31.460218 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:31.460242 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:31.460252 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:31.460259 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:31.462686 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:31.462715 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:31.462723 1266961 round_trippers.go:580]     Audit-Id: 7b1c6bb3-26f1-4239-805a-8dfad95e86fb
	I1101 01:01:31.462730 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:31.462737 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:31.462743 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:31.462750 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:31.462756 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:31 GMT
	I1101 01:01:31.463350 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:31.960181 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:31.960212 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:31.960226 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:31.960233 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:31.962643 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:31.962665 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:31.962674 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:31.962681 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:31 GMT
	I1101 01:01:31.962688 1266961 round_trippers.go:580]     Audit-Id: 7cb2b6ee-e159-4af9-82e4-3272f8445c50
	I1101 01:01:31.962694 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:31.962700 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:31.962706 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:31.962934 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:32.460238 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:32.460260 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:32.460270 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:32.460277 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:32.462770 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:32.462790 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:32.462799 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:32.462805 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:32.462812 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:32.462818 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:32.462825 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:32 GMT
	I1101 01:01:32.462831 1266961 round_trippers.go:580]     Audit-Id: baef5d54-5064-4ec1-b675-f236b5f9e6d1
	I1101 01:01:32.463043 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:32.960465 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:32.960488 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:32.960499 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:32.960506 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:32.962950 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:32.962974 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:32.962983 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:32.962992 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:32.962998 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:32 GMT
	I1101 01:01:32.963005 1266961 round_trippers.go:580]     Audit-Id: a9fc86bf-4740-4a97-9f9f-b74434e5efc3
	I1101 01:01:32.963011 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:32.963022 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:32.963123 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:33.460181 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:33.460204 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:33.460214 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:33.460222 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:33.462734 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:33.462757 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:33.462766 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:33.462773 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:33.462781 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:33.462788 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:33.462793 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:33 GMT
	I1101 01:01:33.462799 1266961 round_trippers.go:580]     Audit-Id: 7a8e5998-88f0-4a82-8fc1-4f2730a5f404
	I1101 01:01:33.462937 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:33.463325 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:33.961090 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:33.961117 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:33.961127 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:33.961135 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:33.963583 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:33.963607 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:33.963615 1266961 round_trippers.go:580]     Audit-Id: 68000f5c-b7a5-41d5-8111-f7ba51991c46
	I1101 01:01:33.963622 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:33.963628 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:33.963634 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:33.963641 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:33.963648 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:33 GMT
	I1101 01:01:33.963899 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:34.461033 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:34.461055 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:34.461064 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:34.461072 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:34.463483 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:34.463514 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:34.463522 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:34.463529 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:34.463536 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:34 GMT
	I1101 01:01:34.463542 1266961 round_trippers.go:580]     Audit-Id: 602b872c-4ee4-4923-bd6a-d26332285540
	I1101 01:01:34.463552 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:34.463558 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:34.463801 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:34.960847 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:34.960872 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:34.960882 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:34.960890 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:34.963318 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:34.963343 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:34.963352 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:34.963359 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:34 GMT
	I1101 01:01:34.963366 1266961 round_trippers.go:580]     Audit-Id: 8403919d-71c5-401c-bc53-bd4adf982dbe
	I1101 01:01:34.963372 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:34.963378 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:34.963388 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:34.963772 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:35.460522 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:35.460543 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:35.460554 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:35.460562 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:35.463013 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:35.463033 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:35.463042 1266961 round_trippers.go:580]     Audit-Id: 982f3a3c-5e1c-4245-ae64-fa05c6714075
	I1101 01:01:35.463048 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:35.463054 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:35.463060 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:35.463067 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:35.463074 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:35 GMT
	I1101 01:01:35.463197 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:35.463585 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:35.960235 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:35.960257 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:35.960268 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:35.960280 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:35.962740 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:35.962765 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:35.962774 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:35.962781 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:35.962787 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:35 GMT
	I1101 01:01:35.962793 1266961 round_trippers.go:580]     Audit-Id: 8c4861f4-d93b-48df-980a-0aecb619f4d2
	I1101 01:01:35.962799 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:35.962805 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:35.962962 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:36.460113 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:36.460134 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:36.460145 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:36.460152 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:36.462592 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:36.462611 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:36.462620 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:36.462626 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:36 GMT
	I1101 01:01:36.462632 1266961 round_trippers.go:580]     Audit-Id: 398df7f4-d8cf-46ca-a650-84be40d55821
	I1101 01:01:36.462638 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:36.462644 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:36.462650 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:36.462839 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:36.960441 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:36.960467 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:36.960483 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:36.960491 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:36.963120 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:36.963147 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:36.963156 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:36 GMT
	I1101 01:01:36.963163 1266961 round_trippers.go:580]     Audit-Id: 42a74db5-79a1-4ac9-98ed-f4c6d1bf1cc1
	I1101 01:01:36.963169 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:36.963176 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:36.963185 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:36.963201 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:36.963478 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:37.460418 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:37.460441 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:37.460452 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:37.460460 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:37.463013 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:37.463033 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:37.463042 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:37 GMT
	I1101 01:01:37.463048 1266961 round_trippers.go:580]     Audit-Id: 0f67c47a-2f50-45ee-b765-10eed1eb4757
	I1101 01:01:37.463055 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:37.463061 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:37.463067 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:37.463073 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:37.463236 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:37.463634 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:37.960853 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:37.960878 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:37.960888 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:37.960896 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:37.963437 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:37.963462 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:37.963471 1266961 round_trippers.go:580]     Audit-Id: f2ad26e3-8f85-4cb2-9141-307c91c67eea
	I1101 01:01:37.963478 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:37.963484 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:37.963492 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:37.963498 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:37.963509 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:37 GMT
	I1101 01:01:37.963612 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:38.460728 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:38.460753 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:38.460764 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:38.460776 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:38.463292 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:38.463317 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:38.463326 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:38.463334 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:38 GMT
	I1101 01:01:38.463340 1266961 round_trippers.go:580]     Audit-Id: 6792e518-ec3e-4419-9406-59831f39f8f8
	I1101 01:01:38.463346 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:38.463356 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:38.463366 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:38.463762 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:38.960377 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:38.960403 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:38.960414 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:38.960421 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:38.963023 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:38.963043 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:38.963051 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:38.963058 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:38.963064 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:38.963070 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:38 GMT
	I1101 01:01:38.963076 1266961 round_trippers.go:580]     Audit-Id: 3da49001-c8d1-4c26-bc68-3c20fab962bf
	I1101 01:01:38.963082 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:38.963169 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:39.460169 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:39.460193 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:39.460204 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:39.460217 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:39.463064 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:39.463087 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:39.463096 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:39.463103 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:39 GMT
	I1101 01:01:39.463109 1266961 round_trippers.go:580]     Audit-Id: 5d594bd9-fa64-42cf-b514-16374129cc3f
	I1101 01:01:39.463116 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:39.463122 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:39.463128 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:39.463238 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:39.960879 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:39.960902 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:39.960912 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:39.960920 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:39.963340 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:39.963361 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:39.963370 1266961 round_trippers.go:580]     Audit-Id: fec6dfa1-c169-4b04-a3fb-9228728ff8a8
	I1101 01:01:39.963376 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:39.963383 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:39.963389 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:39.963407 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:39.963414 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:39 GMT
	I1101 01:01:39.963609 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:39.964017 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:40.460780 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:40.460804 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:40.460815 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:40.460822 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:40.463224 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:40.463247 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:40.463255 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:40.463262 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:40.463269 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:40.463275 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:40.463282 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:40 GMT
	I1101 01:01:40.463292 1266961 round_trippers.go:580]     Audit-Id: a3386ad1-eae5-4bbf-a3e2-0fee1f94f917
	I1101 01:01:40.463484 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:40.960083 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:40.960109 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:40.960124 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:40.960131 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:40.962590 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:40.962609 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:40.962618 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:40.962624 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:40.962631 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:40 GMT
	I1101 01:01:40.962637 1266961 round_trippers.go:580]     Audit-Id: 7ced80ba-b09b-4eb0-9e1d-3ba67d11c15c
	I1101 01:01:40.962643 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:40.962649 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:40.962805 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:41.460475 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:41.460505 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:41.460519 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:41.460528 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:41.463062 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:41.463086 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:41.463094 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:41.463101 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:41.463108 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:41 GMT
	I1101 01:01:41.463114 1266961 round_trippers.go:580]     Audit-Id: 7a13dec1-9c14-4a1e-b05e-d45aa1095772
	I1101 01:01:41.463121 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:41.463130 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:41.463405 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:41.961089 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:41.961112 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:41.961122 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:41.961129 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:41.963527 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:41.963545 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:41.963554 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:41.963561 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:41.963567 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:41 GMT
	I1101 01:01:41.963573 1266961 round_trippers.go:580]     Audit-Id: a87c1d0a-5c31-4725-8aeb-c6ea873f5d0d
	I1101 01:01:41.963579 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:41.963585 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:41.963704 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:41.964112 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:42.461069 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:42.461098 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:42.461108 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:42.461115 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:42.463540 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:42.463562 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:42.463570 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:42 GMT
	I1101 01:01:42.463579 1266961 round_trippers.go:580]     Audit-Id: 19ff82ef-5782-42fc-b5ef-ffebaac38042
	I1101 01:01:42.463585 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:42.463591 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:42.463597 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:42.463604 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:42.464019 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:42.960324 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:42.960350 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:42.960365 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:42.960372 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:42.962919 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:42.962944 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:42.962953 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:42 GMT
	I1101 01:01:42.962959 1266961 round_trippers.go:580]     Audit-Id: 5965b7e6-9af5-459a-9bd0-e2fa5b6c3c13
	I1101 01:01:42.962966 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:42.962972 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:42.962979 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:42.962990 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:42.963125 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:43.460177 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:43.460198 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:43.460209 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:43.460216 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:43.462884 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:43.462902 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:43.462911 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:43.462917 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:43.462924 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:43.462931 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:43 GMT
	I1101 01:01:43.462937 1266961 round_trippers.go:580]     Audit-Id: d3641f3c-48bc-4ec1-b34c-7f6fa1051423
	I1101 01:01:43.462943 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:43.463095 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:43.960731 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:43.960756 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:43.960767 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:43.960774 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:43.963219 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:43.963244 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:43.963253 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:43 GMT
	I1101 01:01:43.963260 1266961 round_trippers.go:580]     Audit-Id: f25c0b6d-636a-418e-8a51-0aeee03d6a64
	I1101 01:01:43.963267 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:43.963273 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:43.963279 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:43.963294 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:43.963428 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:44.460494 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:44.460512 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:44.460522 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:44.460529 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:44.462946 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:44.462965 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:44.462974 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:44.462981 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:44 GMT
	I1101 01:01:44.462987 1266961 round_trippers.go:580]     Audit-Id: 4643c276-a925-4594-837c-3e1d42e0150e
	I1101 01:01:44.462993 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:44.463001 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:44.463007 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:44.463179 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:44.463651 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:44.960808 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:44.960832 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:44.960842 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:44.960852 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:44.963360 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:44.963380 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:44.963389 1266961 round_trippers.go:580]     Audit-Id: b15195d2-75df-4ca3-9337-f5fc02e9aeb2
	I1101 01:01:44.963395 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:44.963402 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:44.963408 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:44.963414 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:44.963421 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:44 GMT
	I1101 01:01:44.963534 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:45.460443 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:45.460466 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:45.460476 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:45.460489 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:45.463243 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:45.463269 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:45.463277 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:45 GMT
	I1101 01:01:45.463284 1266961 round_trippers.go:580]     Audit-Id: 86df2293-c66d-45f7-ad2f-1611db35eb9b
	I1101 01:01:45.463290 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:45.463296 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:45.463302 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:45.463313 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:45.463445 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:45.960179 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:45.960204 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:45.960219 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:45.960226 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:45.962726 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:45.962750 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:45.962758 1266961 round_trippers.go:580]     Audit-Id: 8ba426d0-c12e-41e6-9c7f-64467bd2078f
	I1101 01:01:45.962765 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:45.962771 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:45.962777 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:45.962784 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:45.962790 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:45 GMT
	I1101 01:01:45.962958 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:46.460161 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:46.460182 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:46.460193 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:46.460200 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:46.462695 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:46.462716 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:46.462724 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:46.462730 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:46.462737 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:46.462743 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:46 GMT
	I1101 01:01:46.462749 1266961 round_trippers.go:580]     Audit-Id: a06d2b12-30e8-4dd6-bf57-e825b2120327
	I1101 01:01:46.462755 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:46.463079 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:46.960749 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:46.960774 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:46.960790 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:46.960797 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:46.963223 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:46.963249 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:46.963258 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:46.963266 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:46 GMT
	I1101 01:01:46.963272 1266961 round_trippers.go:580]     Audit-Id: 251393be-01f0-416a-9b37-d39661882d55
	I1101 01:01:46.963278 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:46.963284 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:46.963290 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:46.963427 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:46.963826 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:47.460861 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:47.460884 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:47.460894 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:47.460903 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:47.463300 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:47.463319 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:47.463328 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:47.463336 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:47.463342 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:47.463348 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:47 GMT
	I1101 01:01:47.463354 1266961 round_trippers.go:580]     Audit-Id: 9be95925-58e5-47d1-9b6d-a8753653856a
	I1101 01:01:47.463360 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:47.463488 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:47.960496 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:47.960517 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:47.960526 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:47.960533 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:47.963054 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:47.963078 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:47.963087 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:47.963095 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:47.963102 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:47.963109 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:47 GMT
	I1101 01:01:47.963115 1266961 round_trippers.go:580]     Audit-Id: e07626be-25dc-4689-a371-374e08789ba6
	I1101 01:01:47.963124 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:47.963275 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:48.460300 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:48.460343 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:48.460354 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:48.460361 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:48.463003 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:48.463023 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:48.463031 1266961 round_trippers.go:580]     Audit-Id: dd16ef58-e93f-45af-abd4-c9a5a72ea9a8
	I1101 01:01:48.463038 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:48.463044 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:48.463050 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:48.463056 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:48.463062 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:48 GMT
	I1101 01:01:48.463260 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:48.960957 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:48.960978 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:48.961008 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:48.961016 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:48.963495 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:48.963515 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:48.963523 1266961 round_trippers.go:580]     Audit-Id: 8ea6858a-a4a5-4f17-a335-cf94804bac1d
	I1101 01:01:48.963530 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:48.963536 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:48.963542 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:48.963548 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:48.963557 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:48 GMT
	I1101 01:01:48.963663 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:48.964049 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:49.460864 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:49.460888 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:49.460899 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:49.460912 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:49.463427 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:49.463450 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:49.463460 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:49.463467 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:49.463474 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:49.463481 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:49.463490 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:49 GMT
	I1101 01:01:49.463500 1266961 round_trippers.go:580]     Audit-Id: cfb87fa7-2fbb-4ad0-a00c-6a51c5f3c9e2
	I1101 01:01:49.463670 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:49.960800 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:49.960825 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:49.960836 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:49.960844 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:49.963250 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:49.963271 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:49.963280 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:49.963287 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:49.963293 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:49 GMT
	I1101 01:01:49.963299 1266961 round_trippers.go:580]     Audit-Id: 6eec166c-ffa7-4016-8746-c2e0dc8e34de
	I1101 01:01:49.963305 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:49.963311 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:49.963410 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:50.460165 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:50.460191 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:50.460202 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:50.460209 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:50.462693 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:50.462715 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:50.462724 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:50 GMT
	I1101 01:01:50.462730 1266961 round_trippers.go:580]     Audit-Id: 0eb67006-eada-4dd3-9c5b-054a61647b55
	I1101 01:01:50.462737 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:50.462743 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:50.462750 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:50.462756 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:50.463041 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:50.960399 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:50.960422 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:50.960432 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:50.960439 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:50.962954 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:50.962977 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:50.962986 1266961 round_trippers.go:580]     Audit-Id: 522af9fa-ba67-426e-8c15-091d17ee2292
	I1101 01:01:50.962992 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:50.962998 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:50.963004 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:50.963011 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:50.963018 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:50 GMT
	I1101 01:01:50.963139 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:51.460249 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:51.460272 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:51.460283 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:51.460290 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:51.462847 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:51.462872 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:51.462881 1266961 round_trippers.go:580]     Audit-Id: 0a3a5ebd-1c95-481d-a332-697f650b96ea
	I1101 01:01:51.462888 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:51.462895 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:51.462901 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:51.462912 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:51.462921 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:51 GMT
	I1101 01:01:51.463119 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:51.463518 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:51.960198 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:51.960219 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:51.960230 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:51.960237 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:51.962654 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:51.962676 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:51.962684 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:51.962691 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:51.962698 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:51 GMT
	I1101 01:01:51.962704 1266961 round_trippers.go:580]     Audit-Id: d0865e71-8f6b-44e2-b267-4965e28954cf
	I1101 01:01:51.962719 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:51.962726 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:51.962923 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:52.460901 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:52.460924 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:52.460934 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:52.460941 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:52.463927 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:52.463948 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:52.463956 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:52.463964 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:52 GMT
	I1101 01:01:52.463970 1266961 round_trippers.go:580]     Audit-Id: 37ec48ed-9b94-4dec-98d8-129fd20346af
	I1101 01:01:52.463976 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:52.463983 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:52.463989 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:52.464129 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:52.960833 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:52.960859 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:52.960869 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:52.960876 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:52.963273 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:52.963298 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:52.963306 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:52.963315 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:52.963321 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:52 GMT
	I1101 01:01:52.963328 1266961 round_trippers.go:580]     Audit-Id: ef21a7b5-43d5-4b04-a146-8b1e347746a4
	I1101 01:01:52.963339 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:52.963353 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:52.963446 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:53.460348 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:53.460372 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:53.460382 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:53.460390 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:53.462846 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:53.462872 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:53.462880 1266961 round_trippers.go:580]     Audit-Id: 138157a8-87fd-44c4-a2a0-c67d58911b1c
	I1101 01:01:53.462887 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:53.462893 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:53.462899 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:53.462905 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:53.462916 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:53 GMT
	I1101 01:01:53.463016 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:53.961043 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:53.961067 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:53.961078 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:53.961085 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:53.963336 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:53.963360 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:53.963369 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:53.963376 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:53.963382 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:53.963388 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:53.963398 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:53 GMT
	I1101 01:01:53.963404 1266961 round_trippers.go:580]     Audit-Id: ac101ae8-239a-4a0d-a2bb-cc2a8a8c672f
	I1101 01:01:53.963497 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:53.963895 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:54.460567 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:54.460589 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:54.460599 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:54.460606 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:54.463096 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:54.463118 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:54.463126 1266961 round_trippers.go:580]     Audit-Id: 99895aaf-3f3e-4d0c-8703-4163c6fb1dc2
	I1101 01:01:54.463132 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:54.463138 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:54.463145 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:54.463151 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:54.463158 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:54 GMT
	I1101 01:01:54.463334 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:54.960184 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:54.960208 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:54.960218 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:54.960226 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:54.962702 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:54.962761 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:54.962771 1266961 round_trippers.go:580]     Audit-Id: 17799e65-78be-4f4c-8554-293b8deb2a0f
	I1101 01:01:54.962778 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:54.962784 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:54.962798 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:54.962813 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:54.962820 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:54 GMT
	I1101 01:01:54.962920 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:55.460405 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:55.460429 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:55.460439 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:55.460446 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:55.462974 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:55.463011 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:55.463019 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:55.463054 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:55.463061 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:55.463071 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:55 GMT
	I1101 01:01:55.463077 1266961 round_trippers.go:580]     Audit-Id: 883e22e6-2356-4c35-bef8-5a3329b5ee00
	I1101 01:01:55.463084 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:55.463216 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:55.960261 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:55.960283 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:55.960293 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:55.960300 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:55.962672 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:55.962694 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:55.962706 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:55 GMT
	I1101 01:01:55.962713 1266961 round_trippers.go:580]     Audit-Id: 0930ddb8-d6a1-48ba-8efc-e23eb2a72f86
	I1101 01:01:55.962719 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:55.962726 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:55.962732 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:55.962738 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:55.962847 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:56.460523 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:56.460546 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:56.460557 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:56.460564 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:56.463012 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:56.463069 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:56.463086 1266961 round_trippers.go:580]     Audit-Id: 04bd32fc-1f00-41cc-8807-982a4ad86275
	I1101 01:01:56.463096 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:56.463103 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:56.463109 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:56.463116 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:56.463125 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:56 GMT
	I1101 01:01:56.463256 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:56.463658 1266961 node_ready.go:58] node "multinode-291182" has status "Ready":"False"
	I1101 01:01:56.960398 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:56.960422 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:56.960432 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:56.960439 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:56.962922 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:56.962968 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:56.962976 1266961 round_trippers.go:580]     Audit-Id: 29bce1f7-6be9-4297-adab-1bf2e56e53dd
	I1101 01:01:56.962983 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:56.962989 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:56.962995 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:56.963001 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:56.963015 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:56 GMT
	I1101 01:01:56.963140 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:57.460896 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:57.460918 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:57.460929 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:57.460936 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:57.463355 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:57.463378 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:57.463388 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:57 GMT
	I1101 01:01:57.463394 1266961 round_trippers.go:580]     Audit-Id: 1f8ba00c-94ec-469e-9d5a-7b5496cfc762
	I1101 01:01:57.463401 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:57.463407 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:57.463414 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:57.463420 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:57.463770 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:57.960530 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:57.960553 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:57.960564 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:57.960571 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:57.963003 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:57.963040 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:57.963049 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:57.963055 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:57.963062 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:57.963068 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:57 GMT
	I1101 01:01:57.963074 1266961 round_trippers.go:580]     Audit-Id: 4eb9dfa6-eeb7-4d87-a0ac-73a2ed5b6a09
	I1101 01:01:57.963081 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:57.963289 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"360","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6230 chars]
	I1101 01:01:58.460510 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:58.460532 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.460542 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.460549 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.464449 1266961 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 01:01:58.464471 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.464481 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:58.464487 1266961 round_trippers.go:580]     Audit-Id: 13df8506-a012-4eda-94ec-cc51578b0a22
	I1101 01:01:58.464495 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:58.464513 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:58.464520 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:58.464526 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:58.464693 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:01:58.465099 1266961 node_ready.go:49] node "multinode-291182" has status "Ready":"True"
	I1101 01:01:58.465112 1266961 node_ready.go:38] duration metric: took 32.053125666s waiting for node "multinode-291182" to be "Ready" ...
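The loop above is the node_ready wait: the same GET against /api/v1/nodes/multinode-291182 is retried on a roughly 500ms cadence until the Node's NodeReady condition flips to True (here after 32.05s). A minimal client-go sketch of that pattern, assuming a hypothetical helper name waitNodeReady rather than minikube's actual code:

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady re-fetches the Node every 500ms until its NodeReady
	// condition reports True, or the timeout expires. Sketch only; not
	// minikube's implementation.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
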
	I1101 01:01:58.465122 1266961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:01:58.465194 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1101 01:01:58.465199 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.465207 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.465215 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.468850 1266961 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 01:01:58.468876 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.468884 1266961 round_trippers.go:580]     Audit-Id: a947bdb6-b130-41a9-8c72-d2f8663e577a
	I1101 01:01:58.468891 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:58.468898 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:58.468905 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:58.468911 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:58.468917 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:58.469346 1266961 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-5dd5756b68-578kc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2f19e5cb-4b75-4e3e-a19b-280990e84437","resourceVersion":"430","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0c6132fd-2767-4767-b0e5-2d46bbd373bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6132fd-2767-4767-b0e5-2d46bbd373bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
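With the node Ready, the wait moves on: the PodList above is fetched once from /api/v1/namespaces/kube-system/pods and then narrowed to the system-critical labels named in the log (k8s-app=kube-dns, component=etcd, and so on). A sketch of that filtering step, assuming a hypothetical helper systemCriticalPods:

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/labels"
		"k8s.io/client-go/kubernetes"
	)

	// systemCriticalPods lists kube-system pods and keeps those whose
	// labels match any of the selectors the wait loop tracks. Sketch only.
	func systemCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		var out []corev1.Pod
		for _, pod := range list.Items {
			for _, s := range selectors {
				sel, err := labels.Parse(s)
				if err != nil {
					return nil, err
				}
				if sel.Matches(labels.Set(pod.Labels)) {
					out = append(out, pod)
					break
				}
			}
		}
		return out, nil
	}
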
	I1101 01:01:58.473415 1266961 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-578kc" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:58.473500 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-578kc
	I1101 01:01:58.473511 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.473530 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.473542 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.476288 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:58.476309 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.476318 1266961 round_trippers.go:580]     Audit-Id: 65f572de-f736-405c-ae98-fd1af973ebfb
	I1101 01:01:58.476325 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:58.476331 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:58.476338 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:58.476350 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:58.476357 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:58.476667 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-578kc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2f19e5cb-4b75-4e3e-a19b-280990e84437","resourceVersion":"430","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0c6132fd-2767-4767-b0e5-2d46bbd373bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6132fd-2767-4767-b0e5-2d46bbd373bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1101 01:01:58.477198 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:58.477215 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.477223 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.477230 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.479628 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:58.479648 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.479657 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:58.479664 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:58.479671 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:58.479677 1266961 round_trippers.go:580]     Audit-Id: 75d0a1ca-eac0-45ec-959f-61fc2825b3ee
	I1101 01:01:58.479687 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:58.479694 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:58.479884 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:01:58.480315 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-578kc
	I1101 01:01:58.480331 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.480339 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.480346 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.482615 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:58.482636 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.482644 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:58.482651 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:58.482657 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:58.482663 1266961 round_trippers.go:580]     Audit-Id: 7e08ebee-ab2c-4f8c-a637-983c8308801c
	I1101 01:01:58.482678 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:58.482687 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:58.483071 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-578kc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2f19e5cb-4b75-4e3e-a19b-280990e84437","resourceVersion":"430","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0c6132fd-2767-4767-b0e5-2d46bbd373bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6132fd-2767-4767-b0e5-2d46bbd373bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1101 01:01:58.483606 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:58.483622 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.483631 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.483639 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.485771 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:58.485791 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.485799 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:58.485806 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:58.485812 1266961 round_trippers.go:580]     Audit-Id: 9c7e0ed7-26a6-499e-8a22-1d2a10399726
	I1101 01:01:58.485818 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:58.485828 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:58.485847 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:58.486306 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:01:58.987510 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-578kc
	I1101 01:01:58.987533 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.987543 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.987551 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.990529 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:58.990590 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.990614 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:58.990635 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:58.990672 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:58.990699 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:58.990721 1266961 round_trippers.go:580]     Audit-Id: 4c98a325-e554-48e8-9f1a-34afff4c737f
	I1101 01:01:58.990745 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:58.990877 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-578kc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2f19e5cb-4b75-4e3e-a19b-280990e84437","resourceVersion":"441","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0c6132fd-2767-4767-b0e5-2d46bbd373bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6132fd-2767-4767-b0e5-2d46bbd373bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1101 01:01:58.991451 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:58.991468 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.991477 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.991483 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.996426 1266961 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 01:01:58.996458 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.996468 1266961 round_trippers.go:580]     Audit-Id: 656272c2-a898-4c99-8fa5-c10dc40ccc92
	I1101 01:01:58.996475 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:58.996481 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:58.996488 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:58.996499 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:58.996516 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:58.996835 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:01:58.997292 1266961 pod_ready.go:92] pod "coredns-5dd5756b68-578kc" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:58.997312 1266961 pod_ready.go:81] duration metric: took 523.871417ms waiting for pod "coredns-5dd5756b68-578kc" in "kube-system" namespace to be "Ready" ...
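
The pod_ready.go waits logged here each resolve the same way: fetch the Pod, then report it "Ready" once its Ready condition is True (the status "Ready":"True" lines above). A minimal client-go sketch of that check, not minikube's actual implementation; the kubeconfig path and pod name are assumptions for illustration:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the named pod has condition Ready=True,
	// which is what each pod_ready.go wait above checks for.
	func podIsReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		c := kubernetes.NewForConfigOrDie(cfg)
		ok, err := podIsReady(context.Background(), c, "kube-system", "coredns-5dd5756b68-578kc")
		fmt.Println(ok, err)
	}
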
	I1101 01:01:58.997323 1266961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:58.997397 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-291182
	I1101 01:01:58.997420 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:58.997441 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:58.997457 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:58.999892 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:58.999960 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:58.999983 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.000008 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:58 GMT
	I1101 01:01:59.000045 1266961 round_trippers.go:580]     Audit-Id: dce6be7c-6657-43c6-889d-9ff31963cade
	I1101 01:01:59.000068 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.000085 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.000094 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.000244 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-291182","namespace":"kube-system","uid":"0a33ee34-33c0-4f59-9ae2-8ca35981deae","resourceVersion":"302","creationTimestamp":"2023-11-01T01:01:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8fc77d08e73561102406304e326b0ada","kubernetes.io/config.mirror":"8fc77d08e73561102406304e326b0ada","kubernetes.io/config.seen":"2023-11-01T01:01:14.618392791Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1101 01:01:59.000797 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:59.000817 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:59.000827 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:59.000834 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:59.003481 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:59.003504 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:59.003513 1266961 round_trippers.go:580]     Audit-Id: 85e177d2-ef73-424f-893b-731b66117a58
	I1101 01:01:59.003520 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.003526 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.003532 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.003538 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.003545 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:59 GMT
	I1101 01:01:59.003788 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:01:59.004224 1266961 pod_ready.go:92] pod "etcd-multinode-291182" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:59.004243 1266961 pod_ready.go:81] duration metric: took 6.913651ms waiting for pod "etcd-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:59.004258 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:59.004320 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-291182
	I1101 01:01:59.004330 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:59.004338 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:59.004346 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:59.006779 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:59.006797 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:59.006804 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.006810 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.006818 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:59 GMT
	I1101 01:01:59.006824 1266961 round_trippers.go:580]     Audit-Id: a8e198fe-db8a-47cc-ab7b-efeecbe76b08
	I1101 01:01:59.006831 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.006837 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.006988 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-291182","namespace":"kube-system","uid":"da9644de-cf0b-493c-ad01-f81529c891f0","resourceVersion":"308","creationTimestamp":"2023-11-01T01:01:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6322aef4132b8d2d236e2e4a9c7d6c71","kubernetes.io/config.mirror":"6322aef4132b8d2d236e2e4a9c7d6c71","kubernetes.io/config.seen":"2023-11-01T01:01:14.618398510Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1101 01:01:59.007553 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:59.007560 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:59.007568 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:59.007574 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:59.009740 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:59.009757 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:59.009764 1266961 round_trippers.go:580]     Audit-Id: dcd70177-d96e-4305-b307-9f29b0265973
	I1101 01:01:59.009770 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.009776 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.009782 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.009787 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.009794 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:59 GMT
	I1101 01:01:59.009921 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:01:59.010285 1266961 pod_ready.go:92] pod "kube-apiserver-multinode-291182" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:59.010294 1266961 pod_ready.go:81] duration metric: took 6.029602ms waiting for pod "kube-apiserver-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:59.010303 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:59.060507 1266961 request.go:629] Waited for 50.133866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-291182
	I1101 01:01:59.060575 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-291182
	I1101 01:01:59.060584 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:59.060593 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:59.060600 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:59.063086 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:59.063107 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:59.063115 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.063122 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.063129 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.063135 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:59 GMT
	I1101 01:01:59.063146 1266961 round_trippers.go:580]     Audit-Id: 42f5d3bf-acc3-4faf-8b71-be5c2c73eba2
	I1101 01:01:59.063153 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.063427 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-291182","namespace":"kube-system","uid":"46a662c3-7497-451d-a776-3070e248ea1f","resourceVersion":"309","creationTimestamp":"2023-11-01T01:01:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"815d9bd2feb7a98efe3748f3c66837bf","kubernetes.io/config.mirror":"815d9bd2feb7a98efe3748f3c66837bf","kubernetes.io/config.seen":"2023-11-01T01:01:06.872528900Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1101 01:01:59.260557 1266961 request.go:629] Waited for 196.591033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:59.260653 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:59.260664 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:59.260674 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:59.260681 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:59.263106 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:59.263139 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:59.263148 1266961 round_trippers.go:580]     Audit-Id: 47ec83a9-44e6-493a-914f-e3b62fa71052
	I1101 01:01:59.263154 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.263161 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.263167 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.263179 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.263185 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:59 GMT
	I1101 01:01:59.263396 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:01:59.263787 1266961 pod_ready.go:92] pod "kube-controller-manager-multinode-291182" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:59.263804 1266961 pod_ready.go:81] duration metric: took 253.493927ms waiting for pod "kube-controller-manager-multinode-291182" in "kube-system" namespace to be "Ready" ...
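
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter: once the bucket (QPS with a Burst allowance, defaulting to 5 and 10) is drained by the rapid-fire GETs above, each further request is delayed until a token refills. A sketch of where those knobs live, assuming a rest.Config has already been built; newFasterClient is a hypothetical helper:

	package clientutil

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// newFasterClient raises the client-side token-bucket limits so bursts
	// of requests like the ones above are not delayed ~50-200ms each.
	func newFasterClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
		cfg.QPS = 50    // default is 5 requests/second
		cfg.Burst = 100 // default burst allowance is 10
		return kubernetes.NewForConfig(cfg)
	}
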
	I1101 01:01:59.263815 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-895f8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:59.461208 1266961 request.go:629] Waited for 197.311456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-895f8
	I1101 01:01:59.461288 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-895f8
	I1101 01:01:59.461298 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:59.461308 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:59.461316 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:59.463735 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:59.463787 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:59.463809 1266961 round_trippers.go:580]     Audit-Id: 5745b441-6936-4e63-a076-b7ab03d199d2
	I1101 01:01:59.463834 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.463869 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.463895 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.463918 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.463934 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:59 GMT
	I1101 01:01:59.464079 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-895f8","generateName":"kube-proxy-","namespace":"kube-system","uid":"e98c65c1-d3f2-424e-a05f-652d660bff7b","resourceVersion":"412","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"783f287e-71d3-45d2-84c3-165b969914ad","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"783f287e-71d3-45d2-84c3-165b969914ad\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1101 01:01:59.660895 1266961 request.go:629] Waited for 196.341139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:59.661010 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:01:59.661021 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:59.661031 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:59.661043 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:59.663371 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:59.663404 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:59.663413 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:59 GMT
	I1101 01:01:59.663419 1266961 round_trippers.go:580]     Audit-Id: 17e161c8-9454-4022-ae36-502f6c883011
	I1101 01:01:59.663426 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.663436 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.663449 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.663456 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.663669 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:01:59.664058 1266961 pod_ready.go:92] pod "kube-proxy-895f8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:01:59.664076 1266961 pod_ready.go:81] duration metric: took 400.246675ms waiting for pod "kube-proxy-895f8" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:59.664087 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:01:59.861487 1266961 request.go:629] Waited for 197.328031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-291182
	I1101 01:01:59.861560 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-291182
	I1101 01:01:59.861571 1266961 round_trippers.go:469] Request Headers:
	I1101 01:01:59.861581 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:01:59.861588 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:01:59.864059 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:01:59.864082 1266961 round_trippers.go:577] Response Headers:
	I1101 01:01:59.864091 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:01:59.864098 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:01:59.864104 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:01:59.864110 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:01:59 GMT
	I1101 01:01:59.864122 1266961 round_trippers.go:580]     Audit-Id: 925e5148-6a1b-41ce-ab32-c53b6ace6cda
	I1101 01:01:59.864130 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:01:59.864373 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-291182","namespace":"kube-system","uid":"713ae672-bf7e-4ea7-993e-cf425aa2e548","resourceVersion":"304","creationTimestamp":"2023-11-01T01:01:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92e0b369f3b6f7205d52c0c90e29d288","kubernetes.io/config.mirror":"92e0b369f3b6f7205d52c0c90e29d288","kubernetes.io/config.seen":"2023-11-01T01:01:14.618400766Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1101 01:02:00.061286 1266961 request.go:629] Waited for 196.449201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:00.061386 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:00.061423 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:00.061437 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:00.061454 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:00.064571 1266961 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 01:02:00.064649 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:00.064690 1266961 round_trippers.go:580]     Audit-Id: 3102695e-a124-4971-9eed-cd3145120ae1
	I1101 01:02:00.064739 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:00.064776 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:00.064804 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:00.064819 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:00.064826 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:00 GMT
	I1101 01:02:00.064967 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:02:00.065458 1266961 pod_ready.go:92] pod "kube-scheduler-multinode-291182" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:00.065479 1266961 pod_ready.go:81] duration metric: took 401.385032ms waiting for pod "kube-scheduler-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:00.065494 1266961 pod_ready.go:38] duration metric: took 1.600354163s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:02:00.065513 1266961 api_server.go:52] waiting for apiserver process to appear ...
	I1101 01:02:00.065600 1266961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:02:00.080783 1266961 command_runner.go:130] > 1253
	I1101 01:02:00.082830 1266961 api_server.go:72] duration metric: took 33.729953731s to wait for apiserver process to appear ...
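
The process wait above is a pgrep probe: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest match, so the single output line ("1253") is the running apiserver's PID. A hedged sketch of the same probe, with os/exec standing in for minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
		// pgrep exits non-zero while no matching process exists yet.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not up yet:", err)
			return
		}
		fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
	}
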
	I1101 01:02:00.082862 1266961 api_server.go:88] waiting for apiserver healthz status ...
	I1101 01:02:00.082884 1266961 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1101 01:02:00.092253 1266961 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
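
The healthz wait reduces to an HTTPS GET that succeeds once /healthz answers 200 with body "ok". A minimal sketch of such a probe; certificate verification is skipped here purely for brevity, whereas the real check authenticates with the cluster's certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy treats HTTP 200 with body "ok" as healthy,
	// matching the "returned 200: ok" lines above.
	func apiserverHealthy(base string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Brevity only; do not skip verification outside a sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(base + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.58.2:8443")
		fmt.Println(ok, err)
	}
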
	I1101 01:02:00.092384 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1101 01:02:00.092398 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:00.092408 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:00.092423 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:00.093917 1266961 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 01:02:00.093941 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:00.093950 1266961 round_trippers.go:580]     Content-Length: 264
	I1101 01:02:00.093960 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:00 GMT
	I1101 01:02:00.093966 1266961 round_trippers.go:580]     Audit-Id: eab013a4-2ecb-458a-b114-a1fe6f2c1a9d
	I1101 01:02:00.093972 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:00.093979 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:00.093985 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:00.093992 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:00.094194 1266961 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1101 01:02:00.094344 1266961 api_server.go:141] control plane version: v1.28.3
	I1101 01:02:00.094368 1266961 api_server.go:131] duration metric: took 11.49764ms to wait for apiserver health ...
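
The /version payload above is the standard version.Info shape, and "control plane version: v1.28.3" is simply its gitVersion field. A sketch of the decode with a trimmed local struct (the canonical type is k8s.io/apimachinery/pkg/version.Info):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// versionInfo keeps only the fields used here; the full response
	// also carries gitCommit, buildDate, goVersion, compiler, etc.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
		Platform   string `json:"platform"`
	}

	func main() {
		raw := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.3","platform":"linux/arm64"}`)
		var v versionInfo
		if err := json.Unmarshal(raw, &v); err != nil {
			panic(err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion) // v1.28.3
	}
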
	I1101 01:02:00.094377 1266961 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 01:02:00.260794 1266961 request.go:629] Waited for 166.32093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1101 01:02:00.260857 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1101 01:02:00.260864 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:00.260880 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:00.260887 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:00.264457 1266961 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 01:02:00.264529 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:00.264553 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:00.264576 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:00.264609 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:00.264634 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:00.264658 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:00 GMT
	I1101 01:02:00.264681 1266961 round_trippers.go:580]     Audit-Id: 411d9b11-fa54-4d66-87b8-a0de9fdbda99
	I1101 01:02:00.265204 1266961 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-578kc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2f19e5cb-4b75-4e3e-a19b-280990e84437","resourceVersion":"441","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0c6132fd-2767-4767-b0e5-2d46bbd373bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6132fd-2767-4767-b0e5-2d46bbd373bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1101 01:02:00.267639 1266961 system_pods.go:59] 8 kube-system pods found
	I1101 01:02:00.267667 1266961 system_pods.go:61] "coredns-5dd5756b68-578kc" [2f19e5cb-4b75-4e3e-a19b-280990e84437] Running
	I1101 01:02:00.267674 1266961 system_pods.go:61] "etcd-multinode-291182" [0a33ee34-33c0-4f59-9ae2-8ca35981deae] Running
	I1101 01:02:00.267681 1266961 system_pods.go:61] "kindnet-rlzpj" [66913683-459b-404f-b453-48bccb6ebbdb] Running
	I1101 01:02:00.267686 1266961 system_pods.go:61] "kube-apiserver-multinode-291182" [da9644de-cf0b-493c-ad01-f81529c891f0] Running
	I1101 01:02:00.267698 1266961 system_pods.go:61] "kube-controller-manager-multinode-291182" [46a662c3-7497-451d-a776-3070e248ea1f] Running
	I1101 01:02:00.267703 1266961 system_pods.go:61] "kube-proxy-895f8" [e98c65c1-d3f2-424e-a05f-652d660bff7b] Running
	I1101 01:02:00.267712 1266961 system_pods.go:61] "kube-scheduler-multinode-291182" [713ae672-bf7e-4ea7-993e-cf425aa2e548] Running
	I1101 01:02:00.267718 1266961 system_pods.go:61] "storage-provisioner" [194ac2e0-8f59-49fb-9ede-086271776161] Running
	I1101 01:02:00.267727 1266961 system_pods.go:74] duration metric: took 173.320554ms to wait for pod list to return data ...
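
The "8 kube-system pods found ... Running" summary is produced from a single namespaced list call. A sketch of the equivalent query, under the same assumed clientset wiring as the pod-readiness sketch earlier; listSystemPods is a hypothetical helper:

	package clientutil

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listSystemPods prints a summary like system_pods.go above:
	// pod count, then name, UID, and phase for each kube-system pod.
	func listSystemPods(ctx context.Context, c kubernetes.Interface) error {
		pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
		return nil
	}
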
	I1101 01:02:00.267735 1266961 default_sa.go:34] waiting for default service account to be created ...
	I1101 01:02:00.461125 1266961 request.go:629] Waited for 193.320989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1101 01:02:00.461182 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1101 01:02:00.461188 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:00.461197 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:00.461210 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:00.463614 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:00.463639 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:00.463648 1266961 round_trippers.go:580]     Audit-Id: 2a5a6f7a-b1dd-406b-8433-a9f41ea8e5bd
	I1101 01:02:00.463655 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:00.463666 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:00.463672 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:00.463679 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:00.463687 1266961 round_trippers.go:580]     Content-Length: 261
	I1101 01:02:00.463694 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:00 GMT
	I1101 01:02:00.463718 1266961 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"731819c1-93e8-42b3-bd2c-21d75ca5da7a","resourceVersion":"332","creationTimestamp":"2023-11-01T01:01:26Z"}}]}
	I1101 01:02:00.463928 1266961 default_sa.go:45] found service account: "default"
	I1101 01:02:00.463945 1266961 default_sa.go:55] duration metric: took 196.203425ms for default service account to be created ...
	I1101 01:02:00.463953 1266961 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 01:02:00.661340 1266961 request.go:629] Waited for 197.325142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1101 01:02:00.661395 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1101 01:02:00.661402 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:00.661415 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:00.661430 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:00.664687 1266961 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 01:02:00.664721 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:00.664730 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:00.664737 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:00.664743 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:00.664749 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:00.664758 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:00 GMT
	I1101 01:02:00.664770 1266961 round_trippers.go:580]     Audit-Id: d38645e7-f857-4c0d-b384-7c3a105bdcd4
	I1101 01:02:00.665235 1266961 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5dd5756b68-578kc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2f19e5cb-4b75-4e3e-a19b-280990e84437","resourceVersion":"441","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0c6132fd-2767-4767-b0e5-2d46bbd373bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6132fd-2767-4767-b0e5-2d46bbd373bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1101 01:02:00.667604 1266961 system_pods.go:86] 8 kube-system pods found
	I1101 01:02:00.667628 1266961 system_pods.go:89] "coredns-5dd5756b68-578kc" [2f19e5cb-4b75-4e3e-a19b-280990e84437] Running
	I1101 01:02:00.667636 1266961 system_pods.go:89] "etcd-multinode-291182" [0a33ee34-33c0-4f59-9ae2-8ca35981deae] Running
	I1101 01:02:00.667641 1266961 system_pods.go:89] "kindnet-rlzpj" [66913683-459b-404f-b453-48bccb6ebbdb] Running
	I1101 01:02:00.667647 1266961 system_pods.go:89] "kube-apiserver-multinode-291182" [da9644de-cf0b-493c-ad01-f81529c891f0] Running
	I1101 01:02:00.667653 1266961 system_pods.go:89] "kube-controller-manager-multinode-291182" [46a662c3-7497-451d-a776-3070e248ea1f] Running
	I1101 01:02:00.667662 1266961 system_pods.go:89] "kube-proxy-895f8" [e98c65c1-d3f2-424e-a05f-652d660bff7b] Running
	I1101 01:02:00.667667 1266961 system_pods.go:89] "kube-scheduler-multinode-291182" [713ae672-bf7e-4ea7-993e-cf425aa2e548] Running
	I1101 01:02:00.667671 1266961 system_pods.go:89] "storage-provisioner" [194ac2e0-8f59-49fb-9ede-086271776161] Running
	I1101 01:02:00.667677 1266961 system_pods.go:126] duration metric: took 203.719657ms to wait for k8s-apps to be running ...
	I1101 01:02:00.667684 1266961 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:02:00.667740 1266961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:02:00.682435 1266961 system_svc.go:56] duration metric: took 14.739279ms WaitForService to wait for kubelet.
	I1101 01:02:00.682506 1266961 kubeadm.go:581] duration metric: took 34.329651172s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:02:00.682541 1266961 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:02:00.860858 1266961 request.go:629] Waited for 178.207705ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1101 01:02:00.860941 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1101 01:02:00.860956 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:00.860966 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:00.860976 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:00.863533 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:00.863584 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:00.863606 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:00 GMT
	I1101 01:02:00.863630 1266961 round_trippers.go:580]     Audit-Id: 3682f11f-d52a-4b15-8e0c-e38dcfe99151
	I1101 01:02:00.863667 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:00.863692 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:00.863713 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:00.863746 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:00.863929 1266961 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 6089 chars]
	I1101 01:02:00.864393 1266961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 01:02:00.864418 1266961 node_conditions.go:123] node cpu capacity is 2
	I1101 01:02:00.864429 1266961 node_conditions.go:105] duration metric: took 181.871159ms to run NodePressure ...
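
The NodePressure step reads capacity straight off the Node object: "ephemeral-storage" (203034800Ki above) and "cpu" (2). A sketch of that read, again with clientset wiring assumed; printNodeCapacity is a hypothetical helper:

	package clientutil

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity mirrors the node_conditions.go lines above:
	// fetch the node, then report ephemeral-storage and CPU capacity.
	func printNodeCapacity(ctx context.Context, c kubernetes.Interface, name string) error {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
		return nil
	}
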
	I1101 01:02:00.864441 1266961 start.go:228] waiting for startup goroutines ...
	I1101 01:02:00.864447 1266961 start.go:233] waiting for cluster config update ...
	I1101 01:02:00.864465 1266961 start.go:242] writing updated cluster config ...
	I1101 01:02:00.867229 1266961 out.go:177] 
	I1101 01:02:00.869309 1266961 config.go:182] Loaded profile config "multinode-291182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:02:00.869409 1266961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/config.json ...
	I1101 01:02:00.871570 1266961 out.go:177] * Starting worker node multinode-291182-m02 in cluster multinode-291182
	I1101 01:02:00.873698 1266961 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 01:02:00.875645 1266961 out.go:177] * Pulling base image ...
	I1101 01:02:00.878261 1266961 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:02:00.878294 1266961 cache.go:56] Caching tarball of preloaded images
	I1101 01:02:00.878340 1266961 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon
	I1101 01:02:00.878383 1266961 preload.go:174] Found /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1101 01:02:00.878393 1266961 cache.go:59] Finished verifying existence of preloaded tar for v1.28.3 on crio
	I1101 01:02:00.878493 1266961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/config.json ...
	I1101 01:02:00.905224 1266961 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon, skipping pull
	I1101 01:02:00.905248 1266961 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 exists in daemon, skipping load
	I1101 01:02:00.905266 1266961 cache.go:194] Successfully downloaded all kic artifacts
	I1101 01:02:00.905295 1266961 start.go:365] acquiring machines lock for multinode-291182-m02: {Name:mk983c212c92839cb15f16e44ec741453af3bcd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:02:00.905408 1266961 start.go:369] acquired machines lock for "multinode-291182-m02" in 94.851µs
	I1101 01:02:00.905435 1266961 start.go:93] Provisioning new machine with config: &{Name:multinode-291182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-291182 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 01:02:00.905513 1266961 start.go:125] createHost starting for "m02" (driver="docker")
	I1101 01:02:00.907894 1266961 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1101 01:02:00.907989 1266961 start.go:159] libmachine.API.Create for "multinode-291182" (driver="docker")
	I1101 01:02:00.908004 1266961 client.go:168] LocalClient.Create starting
	I1101 01:02:00.908057 1266961 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem
	I1101 01:02:00.908086 1266961 main.go:141] libmachine: Decoding PEM data...
	I1101 01:02:00.908100 1266961 main.go:141] libmachine: Parsing certificate...
	I1101 01:02:00.908154 1266961 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem
	I1101 01:02:00.908171 1266961 main.go:141] libmachine: Decoding PEM data...
	I1101 01:02:00.908181 1266961 main.go:141] libmachine: Parsing certificate...
	I1101 01:02:00.908414 1266961 cli_runner.go:164] Run: docker network inspect multinode-291182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 01:02:00.927136 1266961 network_create.go:77] Found existing network {name:multinode-291182 subnet:0x40029be0f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1101 01:02:00.927172 1266961 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-291182-m02" container
	I1101 01:02:00.927251 1266961 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 01:02:00.944659 1266961 cli_runner.go:164] Run: docker volume create multinode-291182-m02 --label name.minikube.sigs.k8s.io=multinode-291182-m02 --label created_by.minikube.sigs.k8s.io=true
	I1101 01:02:00.964465 1266961 oci.go:103] Successfully created a docker volume multinode-291182-m02
	I1101 01:02:00.964557 1266961 cli_runner.go:164] Run: docker run --rm --name multinode-291182-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-291182-m02 --entrypoint /usr/bin/test -v multinode-291182-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -d /var/lib
	I1101 01:02:01.563396 1266961 oci.go:107] Successfully prepared a docker volume multinode-291182-m02
	I1101 01:02:01.563437 1266961 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 01:02:01.563457 1266961 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 01:02:01.563549 1266961 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-291182-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 01:02:05.978033 1266961 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-291182-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 -I lz4 -xf /preloaded.tar -C /extractDir: (4.414432702s)
	I1101 01:02:05.978066 1266961 kic.go:203] duration metric: took 4.414605 seconds to extract preloaded images to volume
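
The extraction step is plain docker plumbing: bind-mount the lz4 preload tarball read-only, mount the node's named volume at /extractDir, and untar inside the kic base image so the node container starts with the images already on disk. A sketch of assembling that command with os/exec (paths and image digest copied from the log lines above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tarball := "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4"
		volume := "multinode-291182-m02"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458"

		// --rm discards the helper container; the named volume keeps the
		// extracted images for the node container created afterwards.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
		}
	}
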
	W1101 01:02:05.978220 1266961 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 01:02:05.978336 1266961 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 01:02:06.054725 1266961 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-291182-m02 --name multinode-291182-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-291182-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-291182-m02 --network multinode-291182 --ip 192.168.58.3 --volume multinode-291182-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458
	I1101 01:02:06.424017 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182-m02 --format={{.State.Running}}
	I1101 01:02:06.445166 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182-m02 --format={{.State.Status}}
	I1101 01:02:06.466871 1266961 cli_runner.go:164] Run: docker exec multinode-291182-m02 stat /var/lib/dpkg/alternatives/iptables
	I1101 01:02:06.542298 1266961 oci.go:144] the created container "multinode-291182-m02" has a running status.
	I1101 01:02:06.542326 1266961 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa...
	I1101 01:02:07.258149 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1101 01:02:07.258247 1266961 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 01:02:07.292280 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182-m02 --format={{.State.Status}}
	I1101 01:02:07.330536 1266961 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 01:02:07.330556 1266961 kic_runner.go:114] Args: [docker exec --privileged multinode-291182-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
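
Creating the kic SSH key comes down to generating an RSA keypair, writing the private half as PEM, and installing the public half as the 381-byte authorized_keys entry pushed above. A hedged sketch with crypto/rsa and golang.org/x/crypto/ssh, not minikube's exact code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Private key in PEM, as written to .minikube/machines/<name>/id_rsa.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		// Public key in authorized_keys format, as pushed to the container.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d private-key bytes\n%s", len(privPEM), ssh.MarshalAuthorizedKey(pub))
	}
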
	I1101 01:02:07.433228 1266961 cli_runner.go:164] Run: docker container inspect multinode-291182-m02 --format={{.State.Status}}
	I1101 01:02:07.456671 1266961 machine.go:88] provisioning docker machine ...
	I1101 01:02:07.461313 1266961 ubuntu.go:169] provisioning hostname "multinode-291182-m02"
	I1101 01:02:07.461389 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:02:07.490372 1266961 main.go:141] libmachine: Using SSH client type: native
	I1101 01:02:07.490910 1266961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34372 <nil> <nil>}
	I1101 01:02:07.490928 1266961 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-291182-m02 && echo "multinode-291182-m02" | sudo tee /etc/hostname
	I1101 01:02:07.668082 1266961 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-291182-m02
	
	I1101 01:02:07.668232 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:02:07.703482 1266961 main.go:141] libmachine: Using SSH client type: native
	I1101 01:02:07.703895 1266961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34372 <nil> <nil>}
	I1101 01:02:07.703913 1266961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-291182-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-291182-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-291182-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:02:07.850232 1266961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:02:07.850257 1266961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 01:02:07.850272 1266961 ubuntu.go:177] setting up certificates
	I1101 01:02:07.850281 1266961 provision.go:83] configureAuth start
	I1101 01:02:07.850343 1266961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-291182-m02
	I1101 01:02:07.871837 1266961 provision.go:138] copyHostCerts
	I1101 01:02:07.871880 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 01:02:07.871911 1266961 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem, removing ...
	I1101 01:02:07.871922 1266961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 01:02:07.872001 1266961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 01:02:07.872079 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 01:02:07.872101 1266961 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem, removing ...
	I1101 01:02:07.872106 1266961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 01:02:07.872132 1266961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 01:02:07.872190 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 01:02:07.872210 1266961 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem, removing ...
	I1101 01:02:07.872215 1266961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 01:02:07.872239 1266961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 01:02:07.872286 1266961 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.multinode-291182-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-291182-m02]
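	The server certificate above is generated in-process by provision.go; purely as an illustration, a rough openssl equivalent (file names, validity period, and the use of openssl at all are assumptions, not what minikube runs) might look like:
	
	  openssl req -new -key server-key.pem \
	    -subj "/O=jenkins.multinode-291182-m02" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -extfile <(printf 'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-291182-m02') \
	    -days 365 -out server.pem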
	I1101 01:02:08.394695 1266961 provision.go:172] copyRemoteCerts
	I1101 01:02:08.394813 1266961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:02:08.394863 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:02:08.412770 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34372 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa Username:docker}
	I1101 01:02:08.521003 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1101 01:02:08.521062 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1101 01:02:08.552606 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1101 01:02:08.552675 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 01:02:08.584584 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1101 01:02:08.584649 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 01:02:08.614666 1266961 provision.go:86] duration metric: configureAuth took 764.357284ms
	I1101 01:02:08.614735 1266961 ubuntu.go:193] setting minikube options for container-runtime
	I1101 01:02:08.614948 1266961 config.go:182] Loaded profile config "multinode-291182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:02:08.615086 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:02:08.633668 1266961 main.go:141] libmachine: Using SSH client type: native
	I1101 01:02:08.634079 1266961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34372 <nil> <nil>}
	I1101 01:02:08.634104 1266961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:02:08.906955 1266961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:02:08.907026 1266961 machine.go:91] provisioned docker machine in 1.445736167s
	I1101 01:02:08.907049 1266961 client.go:171] LocalClient.Create took 7.999039298s
	I1101 01:02:08.907100 1266961 start.go:167] duration metric: libmachine.API.Create for "multinode-291182" took 7.999093354s
	I1101 01:02:08.907126 1266961 start.go:300] post-start starting for "multinode-291182-m02" (driver="docker")
	I1101 01:02:08.907151 1266961 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:02:08.907285 1266961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:02:08.907368 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:02:08.926575 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34372 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa Username:docker}
	I1101 01:02:09.028890 1266961 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:02:09.033189 1266961 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1101 01:02:09.033209 1266961 command_runner.go:130] > NAME="Ubuntu"
	I1101 01:02:09.033216 1266961 command_runner.go:130] > VERSION_ID="22.04"
	I1101 01:02:09.033223 1266961 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1101 01:02:09.033232 1266961 command_runner.go:130] > VERSION_CODENAME=jammy
	I1101 01:02:09.033237 1266961 command_runner.go:130] > ID=ubuntu
	I1101 01:02:09.033246 1266961 command_runner.go:130] > ID_LIKE=debian
	I1101 01:02:09.033252 1266961 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1101 01:02:09.033263 1266961 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1101 01:02:09.033274 1266961 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1101 01:02:09.033282 1266961 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1101 01:02:09.033290 1266961 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1101 01:02:09.033949 1266961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 01:02:09.033984 1266961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 01:02:09.034003 1266961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 01:02:09.034017 1266961 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1101 01:02:09.034028 1266961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 01:02:09.034098 1266961 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 01:02:09.034199 1266961 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> 12028972.pem in /etc/ssl/certs
	I1101 01:02:09.034212 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> /etc/ssl/certs/12028972.pem
	I1101 01:02:09.034320 1266961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:02:09.045782 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 01:02:09.074947 1266961 start.go:303] post-start completed in 167.792822ms
	I1101 01:02:09.075300 1266961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-291182-m02
	I1101 01:02:09.094851 1266961 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/config.json ...
	I1101 01:02:09.095132 1266961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 01:02:09.095190 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:02:09.113627 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34372 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa Username:docker}
	I1101 01:02:09.210788 1266961 command_runner.go:130] > 11%!(MISSING)
	I1101 01:02:09.210880 1266961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"

	I1101 01:02:09.215999 1266961 command_runner.go:130] > 173G
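	For reference, the two df probes read the second row of output: column 5 is the use percentage and, with -BG (1 GiB blocks), column 4 is the space still available; a sketch:
	
	  df -h /var  | awk 'NR==2{print $5}'   # 11% used in this run
	  df -BG /var | awk 'NR==2{print $4}'   # 173G free in this run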
	I1101 01:02:09.216383 1266961 start.go:128] duration metric: createHost completed in 8.310859654s
	I1101 01:02:09.216398 1266961 start.go:83] releasing machines lock for "multinode-291182-m02", held for 8.310982698s
	I1101 01:02:09.216470 1266961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-291182-m02
	I1101 01:02:09.241745 1266961 out.go:177] * Found network options:
	I1101 01:02:09.243625 1266961 out.go:177]   - NO_PROXY=192.168.58.2
	W1101 01:02:09.245496 1266961 proxy.go:119] fail to check proxy env: Error ip not in block
	W1101 01:02:09.245545 1266961 proxy.go:119] fail to check proxy env: Error ip not in block
	I1101 01:02:09.245661 1266961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:02:09.245713 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:02:09.245794 1266961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:02:09.245849 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:02:09.266455 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34372 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa Username:docker}
	I1101 01:02:09.281151 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34372 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa Username:docker}
	I1101 01:02:09.516637 1266961 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1101 01:02:09.543977 1266961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 01:02:09.549327 1266961 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1101 01:02:09.549352 1266961 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1101 01:02:09.549361 1266961 command_runner.go:130] > Device: b3h/179d	Inode: 1823288     Links: 1
	I1101 01:02:09.549369 1266961 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 01:02:09.549376 1266961 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1101 01:02:09.549382 1266961 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1101 01:02:09.549388 1266961 command_runner.go:130] > Change: 2023-11-01 00:32:33.104025601 +0000
	I1101 01:02:09.549403 1266961 command_runner.go:130] >  Birth: 2023-11-01 00:32:33.104025601 +0000
	I1101 01:02:09.549777 1266961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:02:09.575993 1266961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 01:02:09.576070 1266961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:02:09.613108 1266961 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1101 01:02:09.613143 1266961 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
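	The .mk_disabled rename used above is reversible; a sketch of restoring one of the configs this run disabled (not performed here):
	
	  sudo mv /etc/cni/net.d/100-crio-bridge.conf.mk_disabled \
	          /etc/cni/net.d/100-crio-bridge.conf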
	I1101 01:02:09.613151 1266961 start.go:472] detecting cgroup driver to use...
	I1101 01:02:09.613181 1266961 detect.go:196] detected "cgroupfs" cgroup driver on host os
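	A quick way to approximate that detection by hand, as a sketch (the exact heuristic in detect.go may differ):
	
	  stat -fc %T /sys/fs/cgroup
	  # "cgroup2fs" => unified hierarchy; "tmpfs" => legacy cgroup v1 (cgroupfs driver here)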
	I1101 01:02:09.613249 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:02:09.633401 1266961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:02:09.647579 1266961 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:02:09.647656 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:02:09.666081 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:02:09.686933 1266961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 01:02:09.802817 1266961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:02:09.820866 1266961 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1101 01:02:09.910289 1266961 docker.go:220] disabling docker service ...
	I1101 01:02:09.910359 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:02:09.932877 1266961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:02:09.949445 1266961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:02:10.054218 1266961 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1101 01:02:10.054360 1266961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:02:10.158530 1266961 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1101 01:02:10.158610 1266961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:02:10.172418 1266961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:02:10.191166 1266961 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1101 01:02:10.192208 1266961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 01:02:10.192295 1266961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:02:10.204565 1266961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 01:02:10.204679 1266961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:02:10.217040 1266961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:02:10.230018 1266961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
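	Taken together, the sed edits above should leave 02-crio.conf carrying these three values, which can be spot-checked with a grep sketch like:
	
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.9"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"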
	I1101 01:02:10.242355 1266961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 01:02:10.253844 1266961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 01:02:10.263140 1266961 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1101 01:02:10.264304 1266961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 01:02:10.275268 1266961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 01:02:10.365600 1266961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 01:02:10.470013 1266961 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 01:02:10.470084 1266961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 01:02:10.474822 1266961 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1101 01:02:10.474891 1266961 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1101 01:02:10.474913 1266961 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1101 01:02:10.474935 1266961 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 01:02:10.474968 1266961 command_runner.go:130] > Access: 2023-11-01 01:02:10.453189059 +0000
	I1101 01:02:10.474998 1266961 command_runner.go:130] > Modify: 2023-11-01 01:02:10.453189059 +0000
	I1101 01:02:10.475020 1266961 command_runner.go:130] > Change: 2023-11-01 01:02:10.453189059 +0000
	I1101 01:02:10.475053 1266961 command_runner.go:130] >  Birth: -
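	The 60s wait above amounts to polling for the socket; an equivalent one-liner, as a sketch:
	
	  timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'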
	I1101 01:02:10.475357 1266961 start.go:540] Will wait 60s for crictl version
	I1101 01:02:10.475437 1266961 ssh_runner.go:195] Run: which crictl
	I1101 01:02:10.479579 1266961 command_runner.go:130] > /usr/bin/crictl
	I1101 01:02:10.480094 1266961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 01:02:10.525873 1266961 command_runner.go:130] > Version:  0.1.0
	I1101 01:02:10.526205 1266961 command_runner.go:130] > RuntimeName:  cri-o
	I1101 01:02:10.526504 1266961 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1101 01:02:10.526777 1266961 command_runner.go:130] > RuntimeApiVersion:  v1
	I1101 01:02:10.529933 1266961 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1101 01:02:10.530062 1266961 ssh_runner.go:195] Run: crio --version
	I1101 01:02:10.582111 1266961 command_runner.go:130] > crio version 1.24.6
	I1101 01:02:10.582135 1266961 command_runner.go:130] > Version:          1.24.6
	I1101 01:02:10.582147 1266961 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1101 01:02:10.582153 1266961 command_runner.go:130] > GitTreeState:     clean
	I1101 01:02:10.582160 1266961 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1101 01:02:10.582198 1266961 command_runner.go:130] > GoVersion:        go1.18.2
	I1101 01:02:10.582241 1266961 command_runner.go:130] > Compiler:         gc
	I1101 01:02:10.582249 1266961 command_runner.go:130] > Platform:         linux/arm64
	I1101 01:02:10.582271 1266961 command_runner.go:130] > Linkmode:         dynamic
	I1101 01:02:10.582288 1266961 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 01:02:10.582294 1266961 command_runner.go:130] > SeccompEnabled:   true
	I1101 01:02:10.582307 1266961 command_runner.go:130] > AppArmorEnabled:  false
	I1101 01:02:10.582404 1266961 ssh_runner.go:195] Run: crio --version
	I1101 01:02:10.630395 1266961 command_runner.go:130] > crio version 1.24.6
	I1101 01:02:10.630417 1266961 command_runner.go:130] > Version:          1.24.6
	I1101 01:02:10.630427 1266961 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1101 01:02:10.630432 1266961 command_runner.go:130] > GitTreeState:     clean
	I1101 01:02:10.630440 1266961 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1101 01:02:10.630464 1266961 command_runner.go:130] > GoVersion:        go1.18.2
	I1101 01:02:10.630480 1266961 command_runner.go:130] > Compiler:         gc
	I1101 01:02:10.630486 1266961 command_runner.go:130] > Platform:         linux/arm64
	I1101 01:02:10.630497 1266961 command_runner.go:130] > Linkmode:         dynamic
	I1101 01:02:10.630507 1266961 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1101 01:02:10.630516 1266961 command_runner.go:130] > SeccompEnabled:   true
	I1101 01:02:10.630521 1266961 command_runner.go:130] > AppArmorEnabled:  false
	I1101 01:02:10.632748 1266961 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1101 01:02:10.634748 1266961 out.go:177]   - env NO_PROXY=192.168.58.2
	I1101 01:02:10.636493 1266961 cli_runner.go:164] Run: docker network inspect multinode-291182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 01:02:10.654347 1266961 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1101 01:02:10.659172 1266961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
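	After that rewrite, the gateway alias should resolve inside the node; verification sketch (IP taken from this run):
	
	  grep host.minikube.internal /etc/hosts
	  # 192.168.58.1	host.minikube.internal
	  getent hosts host.minikube.internal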
	I1101 01:02:10.674152 1266961 certs.go:56] Setting up /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182 for IP: 192.168.58.3
	I1101 01:02:10.674193 1266961 certs.go:190] acquiring lock for shared ca certs: {Name:mk19a54d78f5cf4996fdfc5da5ee5226ef1f844f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 01:02:10.674360 1266961 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key
	I1101 01:02:10.674408 1266961 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key
	I1101 01:02:10.674424 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1101 01:02:10.674438 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1101 01:02:10.674455 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1101 01:02:10.674466 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1101 01:02:10.674531 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem (1338 bytes)
	W1101 01:02:10.674566 1266961 certs.go:433] ignoring /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897_empty.pem, impossibly tiny 0 bytes
	I1101 01:02:10.674579 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 01:02:10.674614 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem (1082 bytes)
	I1101 01:02:10.674643 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem (1123 bytes)
	I1101 01:02:10.674672 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem (1675 bytes)
	I1101 01:02:10.674729 1266961 certs.go:437] found cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 01:02:10.674770 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:02:10.674785 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem -> /usr/share/ca-certificates/1202897.pem
	I1101 01:02:10.674796 1266961 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> /usr/share/ca-certificates/12028972.pem
	I1101 01:02:10.675219 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 01:02:10.706877 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 01:02:10.735364 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 01:02:10.763286 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 01:02:10.794481 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 01:02:10.823259 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/1202897.pem --> /usr/share/ca-certificates/1202897.pem (1338 bytes)
	I1101 01:02:10.850840 1266961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /usr/share/ca-certificates/12028972.pem (1708 bytes)
	I1101 01:02:10.878854 1266961 ssh_runner.go:195] Run: openssl version
	I1101 01:02:10.885424 1266961 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1101 01:02:10.885751 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12028972.pem && ln -fs /usr/share/ca-certificates/12028972.pem /etc/ssl/certs/12028972.pem"
	I1101 01:02:10.897007 1266961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12028972.pem
	I1101 01:02:10.901469 1266961 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  1 00:39 /usr/share/ca-certificates/12028972.pem
	I1101 01:02:10.901496 1266961 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  1 00:39 /usr/share/ca-certificates/12028972.pem
	I1101 01:02:10.901581 1266961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12028972.pem
	I1101 01:02:10.909835 1266961 command_runner.go:130] > 3ec20f2e
	I1101 01:02:10.910235 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12028972.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 01:02:10.921490 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 01:02:10.932617 1266961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:02:10.936865 1266961 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  1 00:33 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:02:10.937075 1266961 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  1 00:33 /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:02:10.937128 1266961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 01:02:10.945076 1266961 command_runner.go:130] > b5213941
	I1101 01:02:10.945679 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 01:02:10.957172 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1202897.pem && ln -fs /usr/share/ca-certificates/1202897.pem /etc/ssl/certs/1202897.pem"
	I1101 01:02:10.968290 1266961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1202897.pem
	I1101 01:02:10.972856 1266961 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  1 00:39 /usr/share/ca-certificates/1202897.pem
	I1101 01:02:10.972888 1266961 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  1 00:39 /usr/share/ca-certificates/1202897.pem
	I1101 01:02:10.972940 1266961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1202897.pem
	I1101 01:02:10.981153 1266961 command_runner.go:130] > 51391683
	I1101 01:02:10.981582 1266961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1202897.pem /etc/ssl/certs/51391683.0"
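	The hash-named links created above follow OpenSSL's c_rehash convention: <subject-hash>.0 must point at the certificate. A verification sketch using the hash from this run:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	  readlink /etc/ssl/certs/b5213941.0
	  # /etc/ssl/certs/minikubeCA.pem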
	I1101 01:02:10.992591 1266961 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1101 01:02:10.996656 1266961 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 01:02:10.996712 1266961 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1101 01:02:10.996822 1266961 ssh_runner.go:195] Run: crio config
	I1101 01:02:11.049088 1266961 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1101 01:02:11.049115 1266961 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1101 01:02:11.049124 1266961 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1101 01:02:11.049128 1266961 command_runner.go:130] > #
	I1101 01:02:11.049164 1266961 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1101 01:02:11.049179 1266961 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1101 01:02:11.049188 1266961 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1101 01:02:11.049204 1266961 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1101 01:02:11.049209 1266961 command_runner.go:130] > # reload'.
	I1101 01:02:11.049241 1266961 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1101 01:02:11.049255 1266961 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1101 01:02:11.049263 1266961 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1101 01:02:11.049274 1266961 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1101 01:02:11.049278 1266961 command_runner.go:130] > [crio]
	I1101 01:02:11.049286 1266961 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1101 01:02:11.049307 1266961 command_runner.go:130] > # container images, in this directory.
	I1101 01:02:11.049322 1266961 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1101 01:02:11.049331 1266961 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1101 01:02:11.049566 1266961 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1101 01:02:11.049580 1266961 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1101 01:02:11.049611 1266961 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1101 01:02:11.049624 1266961 command_runner.go:130] > # storage_driver = "vfs"
	I1101 01:02:11.049632 1266961 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1101 01:02:11.049643 1266961 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1101 01:02:11.049649 1266961 command_runner.go:130] > # storage_option = [
	I1101 01:02:11.049877 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.049916 1266961 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1101 01:02:11.049931 1266961 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1101 01:02:11.049937 1266961 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1101 01:02:11.049949 1266961 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1101 01:02:11.049957 1266961 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1101 01:02:11.049966 1266961 command_runner.go:130] > # always happen on a node reboot
	I1101 01:02:11.049988 1266961 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1101 01:02:11.050002 1266961 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1101 01:02:11.050020 1266961 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1101 01:02:11.050043 1266961 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1101 01:02:11.050072 1266961 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1101 01:02:11.050093 1266961 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1101 01:02:11.050110 1266961 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1101 01:02:11.050116 1266961 command_runner.go:130] > # internal_wipe = true
	I1101 01:02:11.050140 1266961 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1101 01:02:11.050154 1266961 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1101 01:02:11.050162 1266961 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1101 01:02:11.050172 1266961 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1101 01:02:11.050206 1266961 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1101 01:02:11.050220 1266961 command_runner.go:130] > [crio.api]
	I1101 01:02:11.050250 1266961 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1101 01:02:11.050281 1266961 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1101 01:02:11.050296 1266961 command_runner.go:130] > # IP address on which the stream server will listen.
	I1101 01:02:11.050302 1266961 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1101 01:02:11.050311 1266961 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1101 01:02:11.050321 1266961 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1101 01:02:11.050327 1266961 command_runner.go:130] > # stream_port = "0"
	I1101 01:02:11.050334 1266961 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1101 01:02:11.050356 1266961 command_runner.go:130] > # stream_enable_tls = false
	I1101 01:02:11.050371 1266961 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1101 01:02:11.050377 1266961 command_runner.go:130] > # stream_idle_timeout = ""
	I1101 01:02:11.050390 1266961 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1101 01:02:11.050399 1266961 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1101 01:02:11.050407 1266961 command_runner.go:130] > # minutes.
	I1101 01:02:11.050412 1266961 command_runner.go:130] > # stream_tls_cert = ""
	I1101 01:02:11.050433 1266961 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1101 01:02:11.050448 1266961 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1101 01:02:11.050454 1266961 command_runner.go:130] > # stream_tls_key = ""
	I1101 01:02:11.050471 1266961 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1101 01:02:11.050486 1266961 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1101 01:02:11.050505 1266961 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1101 01:02:11.050516 1266961 command_runner.go:130] > # stream_tls_ca = ""
	I1101 01:02:11.050535 1266961 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 01:02:11.050548 1266961 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1101 01:02:11.050557 1266961 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1101 01:02:11.050567 1266961 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1101 01:02:11.050662 1266961 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1101 01:02:11.050676 1266961 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1101 01:02:11.050682 1266961 command_runner.go:130] > [crio.runtime]
	I1101 01:02:11.050700 1266961 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1101 01:02:11.050714 1266961 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1101 01:02:11.050720 1266961 command_runner.go:130] > # "nofile=1024:2048"
	I1101 01:02:11.050753 1266961 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1101 01:02:11.050765 1266961 command_runner.go:130] > # default_ulimits = [
	I1101 01:02:11.050770 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.050778 1266961 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1101 01:02:11.050786 1266961 command_runner.go:130] > # no_pivot = false
	I1101 01:02:11.050793 1266961 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1101 01:02:11.050818 1266961 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1101 01:02:11.050830 1266961 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1101 01:02:11.050847 1266961 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1101 01:02:11.050861 1266961 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1101 01:02:11.050870 1266961 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 01:02:11.050890 1266961 command_runner.go:130] > # conmon = ""
	I1101 01:02:11.050902 1266961 command_runner.go:130] > # Cgroup setting for conmon
	I1101 01:02:11.050911 1266961 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1101 01:02:11.050922 1266961 command_runner.go:130] > conmon_cgroup = "pod"
	I1101 01:02:11.050930 1266961 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1101 01:02:11.050941 1266961 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1101 01:02:11.050950 1266961 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1101 01:02:11.050969 1266961 command_runner.go:130] > # conmon_env = [
	I1101 01:02:11.050979 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.050987 1266961 command_runner.go:130] > # Additional environment variables to set for all the
	I1101 01:02:11.050993 1266961 command_runner.go:130] > # containers. These are overridden if set in the
	I1101 01:02:11.051006 1266961 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1101 01:02:11.051012 1266961 command_runner.go:130] > # default_env = [
	I1101 01:02:11.051020 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.051027 1266961 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1101 01:02:11.051048 1266961 command_runner.go:130] > # selinux = false
	I1101 01:02:11.051063 1266961 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1101 01:02:11.051071 1266961 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1101 01:02:11.051082 1266961 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1101 01:02:11.051087 1266961 command_runner.go:130] > # seccomp_profile = ""
	I1101 01:02:11.051095 1266961 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1101 01:02:11.051105 1266961 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1101 01:02:11.051125 1266961 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1101 01:02:11.051138 1266961 command_runner.go:130] > # which might increase security.
	I1101 01:02:11.051145 1266961 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1101 01:02:11.051165 1266961 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1101 01:02:11.051179 1266961 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1101 01:02:11.051200 1266961 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1101 01:02:11.051215 1266961 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1101 01:02:11.051231 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:02:11.051243 1266961 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1101 01:02:11.051251 1266961 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1101 01:02:11.051260 1266961 command_runner.go:130] > # the cgroup blockio controller.
	I1101 01:02:11.051282 1266961 command_runner.go:130] > # blockio_config_file = ""
	I1101 01:02:11.051300 1266961 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1101 01:02:11.051312 1266961 command_runner.go:130] > # irqbalance daemon.
	I1101 01:02:11.051319 1266961 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1101 01:02:11.051332 1266961 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1101 01:02:11.051351 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:02:11.051364 1266961 command_runner.go:130] > # rdt_config_file = ""
	I1101 01:02:11.051380 1266961 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1101 01:02:11.052064 1266961 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1101 01:02:11.052080 1266961 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1101 01:02:11.052102 1266961 command_runner.go:130] > # separate_pull_cgroup = ""
	I1101 01:02:11.052115 1266961 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1101 01:02:11.052123 1266961 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1101 01:02:11.052130 1266961 command_runner.go:130] > # will be added.
	I1101 01:02:11.052136 1266961 command_runner.go:130] > # default_capabilities = [
	I1101 01:02:11.052143 1266961 command_runner.go:130] > # 	"CHOWN",
	I1101 01:02:11.052149 1266961 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1101 01:02:11.052154 1266961 command_runner.go:130] > # 	"FSETID",
	I1101 01:02:11.052162 1266961 command_runner.go:130] > # 	"FOWNER",
	I1101 01:02:11.052173 1266961 command_runner.go:130] > # 	"SETGID",
	I1101 01:02:11.052181 1266961 command_runner.go:130] > # 	"SETUID",
	I1101 01:02:11.052186 1266961 command_runner.go:130] > # 	"SETPCAP",
	I1101 01:02:11.052194 1266961 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1101 01:02:11.052199 1266961 command_runner.go:130] > # 	"KILL",
	I1101 01:02:11.052206 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.052215 1266961 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1101 01:02:11.052226 1266961 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1101 01:02:11.052233 1266961 command_runner.go:130] > # add_inheritable_capabilities = true
	I1101 01:02:11.052250 1266961 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1101 01:02:11.052262 1266961 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 01:02:11.052267 1266961 command_runner.go:130] > # default_sysctls = [
	I1101 01:02:11.052271 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.052277 1266961 command_runner.go:130] > # List of devices on the host that a
	I1101 01:02:11.052291 1266961 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1101 01:02:11.052296 1266961 command_runner.go:130] > # allowed_devices = [
	I1101 01:02:11.052303 1266961 command_runner.go:130] > # 	"/dev/fuse",
	I1101 01:02:11.052308 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.052322 1266961 command_runner.go:130] > # List of additional devices. specified as
	I1101 01:02:11.052343 1266961 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1101 01:02:11.052354 1266961 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1101 01:02:11.052361 1266961 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1101 01:02:11.052367 1266961 command_runner.go:130] > # additional_devices = [
	I1101 01:02:11.052376 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.052382 1266961 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1101 01:02:11.052396 1266961 command_runner.go:130] > # cdi_spec_dirs = [
	I1101 01:02:11.052405 1266961 command_runner.go:130] > # 	"/etc/cdi",
	I1101 01:02:11.052410 1266961 command_runner.go:130] > # 	"/var/run/cdi",
	I1101 01:02:11.052415 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.052426 1266961 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1101 01:02:11.052434 1266961 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1101 01:02:11.052443 1266961 command_runner.go:130] > # Defaults to false.
	I1101 01:02:11.052739 1266961 command_runner.go:130] > # device_ownership_from_security_context = false
	I1101 01:02:11.052772 1266961 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1101 01:02:11.052780 1266961 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1101 01:02:11.052791 1266961 command_runner.go:130] > # hooks_dir = [
	I1101 01:02:11.052797 1266961 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1101 01:02:11.052802 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.052811 1266961 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1101 01:02:11.052821 1266961 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1101 01:02:11.052834 1266961 command_runner.go:130] > # its default mounts from the following two files:
	I1101 01:02:11.052843 1266961 command_runner.go:130] > #
	I1101 01:02:11.052851 1266961 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1101 01:02:11.052862 1266961 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1101 01:02:11.052870 1266961 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1101 01:02:11.052877 1266961 command_runner.go:130] > #
	I1101 01:02:11.052885 1266961 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1101 01:02:11.052895 1266961 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1101 01:02:11.052911 1266961 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1101 01:02:11.052922 1266961 command_runner.go:130] > #      only add mounts it finds in this file.
	I1101 01:02:11.052927 1266961 command_runner.go:130] > #
	I1101 01:02:11.052932 1266961 command_runner.go:130] > # default_mounts_file = ""
	I1101 01:02:11.052941 1266961 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1101 01:02:11.052950 1266961 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1101 01:02:11.052955 1266961 command_runner.go:130] > # pids_limit = 0
	I1101 01:02:11.052965 1266961 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1101 01:02:11.052988 1266961 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1101 01:02:11.053004 1266961 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1101 01:02:11.053014 1266961 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1101 01:02:11.053023 1266961 command_runner.go:130] > # log_size_max = -1
	I1101 01:02:11.053033 1266961 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1101 01:02:11.053043 1266961 command_runner.go:130] > # log_to_journald = false
	I1101 01:02:11.053101 1266961 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1101 01:02:11.053116 1266961 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1101 01:02:11.053124 1266961 command_runner.go:130] > # Path to directory for container attach sockets.
	I1101 01:02:11.053133 1266961 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1101 01:02:11.053143 1266961 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1101 01:02:11.053148 1266961 command_runner.go:130] > # bind_mount_prefix = ""
	I1101 01:02:11.053155 1266961 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1101 01:02:11.053161 1266961 command_runner.go:130] > # read_only = false
	I1101 01:02:11.053180 1266961 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1101 01:02:11.053193 1266961 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1101 01:02:11.053198 1266961 command_runner.go:130] > # live configuration reload.
	I1101 01:02:11.053203 1266961 command_runner.go:130] > # log_level = "info"
	I1101 01:02:11.053213 1266961 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1101 01:02:11.053222 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:02:11.053230 1266961 command_runner.go:130] > # log_filter = ""
	I1101 01:02:11.053237 1266961 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1101 01:02:11.053307 1266961 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1101 01:02:11.053330 1266961 command_runner.go:130] > # separated by comma.
	I1101 01:02:11.053338 1266961 command_runner.go:130] > # uid_mappings = ""
	I1101 01:02:11.053347 1266961 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1101 01:02:11.053358 1266961 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1101 01:02:11.053363 1266961 command_runner.go:130] > # separated by comma.
	I1101 01:02:11.053367 1266961 command_runner.go:130] > # gid_mappings = ""
	I1101 01:02:11.053375 1266961 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1101 01:02:11.053385 1266961 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 01:02:11.053408 1266961 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 01:02:11.053417 1266961 command_runner.go:130] > # minimum_mappable_uid = -1
	I1101 01:02:11.053425 1266961 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1101 01:02:11.053435 1266961 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1101 01:02:11.053445 1266961 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1101 01:02:11.053454 1266961 command_runner.go:130] > # minimum_mappable_gid = -1
	I1101 01:02:11.053462 1266961 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1101 01:02:11.053475 1266961 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1101 01:02:11.053487 1266961 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1101 01:02:11.053495 1266961 command_runner.go:130] > # ctr_stop_timeout = 30
	I1101 01:02:11.053505 1266961 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1101 01:02:11.053513 1266961 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1101 01:02:11.053522 1266961 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1101 01:02:11.053528 1266961 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1101 01:02:11.053539 1266961 command_runner.go:130] > # drop_infra_ctr = true
	I1101 01:02:11.053553 1266961 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1101 01:02:11.053563 1266961 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1101 01:02:11.053573 1266961 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1101 01:02:11.053582 1266961 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1101 01:02:11.053589 1266961 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1101 01:02:11.053598 1266961 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1101 01:02:11.053603 1266961 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1101 01:02:11.053614 1266961 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1101 01:02:11.053625 1266961 command_runner.go:130] > # pinns_path = ""
	I1101 01:02:11.053636 1266961 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1101 01:02:11.053644 1266961 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1101 01:02:11.053654 1266961 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1101 01:02:11.053660 1266961 command_runner.go:130] > # default_runtime = "runc"
	I1101 01:02:11.053671 1266961 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1101 01:02:11.053680 1266961 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1101 01:02:11.053701 1266961 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1101 01:02:11.053711 1266961 command_runner.go:130] > # creation as a file is not desired either.
	I1101 01:02:11.053721 1266961 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1101 01:02:11.053730 1266961 command_runner.go:130] > # the hostname is being managed dynamically.
	I1101 01:02:11.053736 1266961 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1101 01:02:11.053740 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.053751 1266961 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1101 01:02:11.053762 1266961 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1101 01:02:11.053780 1266961 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1101 01:02:11.053792 1266961 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1101 01:02:11.053797 1266961 command_runner.go:130] > #
	I1101 01:02:11.053805 1266961 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1101 01:02:11.053811 1266961 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1101 01:02:11.053818 1266961 command_runner.go:130] > #  runtime_type = "oci"
	I1101 01:02:11.053824 1266961 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1101 01:02:11.053834 1266961 command_runner.go:130] > #  privileged_without_host_devices = false
	I1101 01:02:11.053843 1266961 command_runner.go:130] > #  allowed_annotations = []
	I1101 01:02:11.053853 1266961 command_runner.go:130] > # Where:
	I1101 01:02:11.053864 1266961 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1101 01:02:11.053872 1266961 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1101 01:02:11.053883 1266961 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1101 01:02:11.053891 1266961 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1101 01:02:11.053900 1266961 command_runner.go:130] > #   in $PATH.
	I1101 01:02:11.053907 1266961 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1101 01:02:11.053913 1266961 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1101 01:02:11.053929 1266961 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1101 01:02:11.053937 1266961 command_runner.go:130] > #   state.
	I1101 01:02:11.053945 1266961 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1101 01:02:11.053955 1266961 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1101 01:02:11.053963 1266961 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1101 01:02:11.053973 1266961 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1101 01:02:11.053981 1266961 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1101 01:02:11.054055 1266961 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1101 01:02:11.054080 1266961 command_runner.go:130] > #   The currently recognized values are:
	I1101 01:02:11.054090 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1101 01:02:11.054101 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1101 01:02:11.054109 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1101 01:02:11.054119 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1101 01:02:11.054132 1266961 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1101 01:02:11.054141 1266961 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1101 01:02:11.054157 1266961 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1101 01:02:11.054170 1266961 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1101 01:02:11.054177 1266961 command_runner.go:130] > #   should be moved to the container's cgroup
	I1101 01:02:11.054185 1266961 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1101 01:02:11.054192 1266961 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1101 01:02:11.054197 1266961 command_runner.go:130] > runtime_type = "oci"
	I1101 01:02:11.054205 1266961 command_runner.go:130] > runtime_root = "/run/runc"
	I1101 01:02:11.054212 1266961 command_runner.go:130] > runtime_config_path = ""
	I1101 01:02:11.054217 1266961 command_runner.go:130] > monitor_path = ""
	I1101 01:02:11.054230 1266961 command_runner.go:130] > monitor_cgroup = ""
	I1101 01:02:11.054236 1266961 command_runner.go:130] > monitor_exec_cgroup = ""
	I1101 01:02:11.054257 1266961 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1101 01:02:11.054268 1266961 command_runner.go:130] > # running containers
	I1101 01:02:11.054275 1266961 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1101 01:02:11.054282 1266961 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1101 01:02:11.054293 1266961 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1101 01:02:11.054307 1266961 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1101 01:02:11.054317 1266961 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1101 01:02:11.054323 1266961 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1101 01:02:11.054332 1266961 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1101 01:02:11.054337 1266961 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1101 01:02:11.054345 1266961 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1101 01:02:11.054354 1266961 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1101 01:02:11.054362 1266961 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1101 01:02:11.054376 1266961 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1101 01:02:11.054385 1266961 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1101 01:02:11.054397 1266961 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1101 01:02:11.054407 1266961 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1101 01:02:11.054418 1266961 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1101 01:02:11.054429 1266961 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1101 01:02:11.054443 1266961 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1101 01:02:11.054459 1266961 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1101 01:02:11.054468 1266961 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1101 01:02:11.054475 1266961 command_runner.go:130] > # Example:
	I1101 01:02:11.054482 1266961 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1101 01:02:11.054491 1266961 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1101 01:02:11.054497 1266961 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1101 01:02:11.054506 1266961 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1101 01:02:11.054511 1266961 command_runner.go:130] > # cpuset = "0-1"
	I1101 01:02:11.054516 1266961 command_runner.go:130] > # cpushares = 0
	I1101 01:02:11.054530 1266961 command_runner.go:130] > # Where:
	I1101 01:02:11.054542 1266961 command_runner.go:130] > # The workload name is workload-type.
	I1101 01:02:11.054551 1266961 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1101 01:02:11.054558 1266961 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1101 01:02:11.054567 1266961 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1101 01:02:11.054578 1266961 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1101 01:02:11.054588 1266961 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1101 01:02:11.054600 1266961 command_runner.go:130] > # 
	I1101 01:02:11.054613 1266961 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1101 01:02:11.054617 1266961 command_runner.go:130] > #
	I1101 01:02:11.054627 1266961 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1101 01:02:11.054635 1266961 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1101 01:02:11.054645 1266961 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1101 01:02:11.054656 1266961 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1101 01:02:11.054663 1266961 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1101 01:02:11.054678 1266961 command_runner.go:130] > [crio.image]
	I1101 01:02:11.054688 1266961 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1101 01:02:11.054696 1266961 command_runner.go:130] > # default_transport = "docker://"
	I1101 01:02:11.054704 1266961 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1101 01:02:11.054714 1266961 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1101 01:02:11.054719 1266961 command_runner.go:130] > # global_auth_file = ""
	I1101 01:02:11.054725 1266961 command_runner.go:130] > # The image used to instantiate infra containers.
	I1101 01:02:11.054802 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:02:11.054814 1266961 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1101 01:02:11.054823 1266961 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1101 01:02:11.054830 1266961 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1101 01:02:11.054842 1266961 command_runner.go:130] > # This option supports live configuration reload.
	I1101 01:02:11.054848 1266961 command_runner.go:130] > # pause_image_auth_file = ""
	I1101 01:02:11.054855 1266961 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1101 01:02:11.054874 1266961 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1101 01:02:11.054886 1266961 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1101 01:02:11.054894 1266961 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1101 01:02:11.054901 1266961 command_runner.go:130] > # pause_command = "/pause"
	I1101 01:02:11.054909 1266961 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1101 01:02:11.054917 1266961 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1101 01:02:11.054928 1266961 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1101 01:02:11.054936 1266961 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1101 01:02:11.054951 1266961 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1101 01:02:11.054959 1266961 command_runner.go:130] > # signature_policy = ""
	I1101 01:02:11.054967 1266961 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1101 01:02:11.054977 1266961 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1101 01:02:11.054983 1266961 command_runner.go:130] > # changing them here.
	I1101 01:02:11.054989 1266961 command_runner.go:130] > # insecure_registries = [
	I1101 01:02:11.054997 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.055005 1266961 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1101 01:02:11.055020 1266961 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1101 01:02:11.055029 1266961 command_runner.go:130] > # image_volumes = "mkdir"
	I1101 01:02:11.055036 1266961 command_runner.go:130] > # Temporary directory to use for storing big files
	I1101 01:02:11.055045 1266961 command_runner.go:130] > # big_files_temporary_dir = ""
	I1101 01:02:11.055052 1266961 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1101 01:02:11.055060 1266961 command_runner.go:130] > # CNI plugins.
	I1101 01:02:11.055065 1266961 command_runner.go:130] > [crio.network]
	I1101 01:02:11.055072 1266961 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1101 01:02:11.055079 1266961 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1101 01:02:11.055086 1266961 command_runner.go:130] > # cni_default_network = ""
	I1101 01:02:11.055099 1266961 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1101 01:02:11.055109 1266961 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1101 01:02:11.055116 1266961 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1101 01:02:11.055123 1266961 command_runner.go:130] > # plugin_dirs = [
	I1101 01:02:11.055128 1266961 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1101 01:02:11.055133 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.055143 1266961 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1101 01:02:11.055148 1266961 command_runner.go:130] > [crio.metrics]
	I1101 01:02:11.055154 1266961 command_runner.go:130] > # Globally enable or disable metrics support.
	I1101 01:02:11.055159 1266961 command_runner.go:130] > # enable_metrics = false
	I1101 01:02:11.055174 1266961 command_runner.go:130] > # Specify enabled metrics collectors.
	I1101 01:02:11.055184 1266961 command_runner.go:130] > # Per default all metrics are enabled.
	I1101 01:02:11.055192 1266961 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1101 01:02:11.055202 1266961 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1101 01:02:11.055210 1266961 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1101 01:02:11.055219 1266961 command_runner.go:130] > # metrics_collectors = [
	I1101 01:02:11.055224 1266961 command_runner.go:130] > # 	"operations",
	I1101 01:02:11.055233 1266961 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1101 01:02:11.055245 1266961 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1101 01:02:11.055254 1266961 command_runner.go:130] > # 	"operations_errors",
	I1101 01:02:11.055260 1266961 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1101 01:02:11.055268 1266961 command_runner.go:130] > # 	"image_pulls_by_name",
	I1101 01:02:11.055274 1266961 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1101 01:02:11.055279 1266961 command_runner.go:130] > # 	"image_pulls_failures",
	I1101 01:02:11.055289 1266961 command_runner.go:130] > # 	"image_pulls_successes",
	I1101 01:02:11.055294 1266961 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1101 01:02:11.055302 1266961 command_runner.go:130] > # 	"image_layer_reuse",
	I1101 01:02:11.055307 1266961 command_runner.go:130] > # 	"containers_oom_total",
	I1101 01:02:11.055312 1266961 command_runner.go:130] > # 	"containers_oom",
	I1101 01:02:11.055325 1266961 command_runner.go:130] > # 	"processes_defunct",
	I1101 01:02:11.055330 1266961 command_runner.go:130] > # 	"operations_total",
	I1101 01:02:11.055336 1266961 command_runner.go:130] > # 	"operations_latency_seconds",
	I1101 01:02:11.055344 1266961 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1101 01:02:11.055350 1266961 command_runner.go:130] > # 	"operations_errors_total",
	I1101 01:02:11.055357 1266961 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1101 01:02:11.055363 1266961 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1101 01:02:11.055371 1266961 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1101 01:02:11.055377 1266961 command_runner.go:130] > # 	"image_pulls_success_total",
	I1101 01:02:11.055382 1266961 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1101 01:02:11.055396 1266961 command_runner.go:130] > # 	"containers_oom_count_total",
	I1101 01:02:11.055401 1266961 command_runner.go:130] > # ]
	I1101 01:02:11.055407 1266961 command_runner.go:130] > # The port on which the metrics server will listen.
	I1101 01:02:11.055413 1266961 command_runner.go:130] > # metrics_port = 9090
	I1101 01:02:11.055422 1266961 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1101 01:02:11.055427 1266961 command_runner.go:130] > # metrics_socket = ""
	I1101 01:02:11.055435 1266961 command_runner.go:130] > # The certificate for the secure metrics server.
	I1101 01:02:11.055446 1266961 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1101 01:02:11.055453 1266961 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1101 01:02:11.055469 1266961 command_runner.go:130] > # certificate on any modification event.
	I1101 01:02:11.055477 1266961 command_runner.go:130] > # metrics_cert = ""
	I1101 01:02:11.055483 1266961 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1101 01:02:11.055490 1266961 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1101 01:02:11.055495 1266961 command_runner.go:130] > # metrics_key = ""
	I1101 01:02:11.055504 1266961 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1101 01:02:11.055512 1266961 command_runner.go:130] > [crio.tracing]
	I1101 01:02:11.055519 1266961 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1101 01:02:11.055526 1266961 command_runner.go:130] > # enable_tracing = false
	I1101 01:02:11.055533 1266961 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1101 01:02:11.055546 1266961 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1101 01:02:11.055553 1266961 command_runner.go:130] > # Number of samples to collect per million spans.
	I1101 01:02:11.055564 1266961 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1101 01:02:11.055636 1266961 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1101 01:02:11.055650 1266961 command_runner.go:130] > [crio.stats]
	I1101 01:02:11.055658 1266961 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1101 01:02:11.055665 1266961 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1101 01:02:11.055671 1266961 command_runner.go:130] > # stats_collection_period = 0
	I1101 01:02:11.057766 1266961 command_runner.go:130] ! time="2023-11-01 01:02:11.046449234Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1101 01:02:11.057806 1266961 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1101 01:02:11.057867 1266961 cni.go:84] Creating CNI manager for ""
	I1101 01:02:11.057885 1266961 cni.go:136] 2 nodes found, recommending kindnet
	I1101 01:02:11.057894 1266961 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1101 01:02:11.057919 1266961 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-291182 NodeName:multinode-291182-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 01:02:11.058064 1266961 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-291182-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 01:02:11.058134 1266961 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-291182-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-291182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1101 01:02:11.058210 1266961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1101 01:02:11.067626 1266961 command_runner.go:130] > kubeadm
	I1101 01:02:11.067708 1266961 command_runner.go:130] > kubectl
	I1101 01:02:11.067721 1266961 command_runner.go:130] > kubelet
	I1101 01:02:11.068834 1266961 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 01:02:11.068932 1266961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1101 01:02:11.079600 1266961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1101 01:02:11.101316 1266961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 01:02:11.122222 1266961 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1101 01:02:11.126602 1266961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
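	The bash one-liner above makes the /etc/hosts update idempotent: it strips any existing control-plane.minikube.internal line, appends the current mapping, and copies the result back in place via sudo. A minimal Go sketch of the same filter-and-append pattern (values taken from the log above; writing /etc/hosts requires root, and error handling is abbreviated):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the grep/echo pipeline above: it drops any
	// line already ending in "<TAB>host" and appends a fresh "ip<TAB>host"
	// entry. Hostname and IP here are illustrative values from the log.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry, replaced below
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}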
	I1101 01:02:11.139897 1266961 host.go:66] Checking if "multinode-291182" exists ...
	I1101 01:02:11.140202 1266961 config.go:182] Loaded profile config "multinode-291182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:02:11.140438 1266961 start.go:304] JoinCluster: &{Name:multinode-291182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-291182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 01:02:11.140532 1266961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1101 01:02:11.140618 1266961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:02:11.158483 1266961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
	I1101 01:02:11.339905 1266961 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ptr051.pot6xj9yf72panzf --discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 
	I1101 01:02:11.339969 1266961 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 01:02:11.340014 1266961 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ptr051.pot6xj9yf72panzf --discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-291182-m02"
	I1101 01:02:11.386109 1266961 command_runner.go:130] > [preflight] Running pre-flight checks
	I1101 01:02:11.422825 1266961 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1101 01:02:11.422848 1266961 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1049-aws
	I1101 01:02:11.422855 1266961 command_runner.go:130] > OS: Linux
	I1101 01:02:11.422861 1266961 command_runner.go:130] > CGROUPS_CPU: enabled
	I1101 01:02:11.422868 1266961 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1101 01:02:11.422882 1266961 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1101 01:02:11.422890 1266961 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1101 01:02:11.422896 1266961 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1101 01:02:11.422907 1266961 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1101 01:02:11.422915 1266961 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1101 01:02:11.422922 1266961 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1101 01:02:11.422928 1266961 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1101 01:02:11.534685 1266961 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1101 01:02:11.534708 1266961 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1101 01:02:11.564192 1266961 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 01:02:11.564463 1266961 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 01:02:11.564475 1266961 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1101 01:02:11.668132 1266961 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1101 01:02:14.696515 1266961 command_runner.go:130] > This node has joined the cluster:
	I1101 01:02:14.696546 1266961 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1101 01:02:14.696555 1266961 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1101 01:02:14.696579 1266961 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1101 01:02:14.699688 1266961 command_runner.go:130] ! W1101 01:02:11.385716    1035 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1101 01:02:14.699721 1266961 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1101 01:02:14.699733 1266961 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 01:02:14.699746 1266961 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ptr051.pot6xj9yf72panzf --discovery-token-ca-cert-hash sha256:3922e75285c67fab1116b614362234745af70cc8c941ea9944c97ac3e3b5f568 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-291182-m02": (3.359714529s)
	I1101 01:02:14.699766 1266961 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1101 01:02:14.946252 1266961 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1101 01:02:14.946279 1266961 start.go:306] JoinCluster complete in 3.80583955s
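	The join itself is a two-step flow: "kubeadm token create --print-join-command --ttl=0" on the control plane emits the exact "kubeadm join ..." command (token plus discovery CA cert hash), which is then executed on the new node with --ignore-preflight-errors=all, the CRI socket, and the node name appended. A hedged Go sketch of that sequence with os/exec (both steps run locally here for brevity; in the log each runs over SSH on its own node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// joinWorker runs the two kubeadm invocations from the log: ask the
	// control plane for a join command, then execute it with the extra
	// flags minikube appends.
	func joinWorker(nodeName, criSocket string) error {
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			return fmt.Errorf("token create: %w", err)
		}
		args := strings.Fields(strings.TrimSpace(string(out)))[1:] // drop the leading "kubeadm"
		args = append(args, "--ignore-preflight-errors=all", "--cri-socket", criSocket, "--node-name="+nodeName)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm join: %w: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := joinWorker("multinode-291182-m02", "/var/run/crio/crio.sock"); err != nil {
			fmt.Println(err)
		}
	}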
	I1101 01:02:14.946290 1266961 cni.go:84] Creating CNI manager for ""
	I1101 01:02:14.946296 1266961 cni.go:136] 2 nodes found, recommending kindnet
	I1101 01:02:14.946346 1266961 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 01:02:14.951145 1266961 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1101 01:02:14.951165 1266961 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1101 01:02:14.951173 1266961 command_runner.go:130] > Device: 3ah/58d	Inode: 1827008     Links: 1
	I1101 01:02:14.951181 1266961 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1101 01:02:14.951188 1266961 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1101 01:02:14.951194 1266961 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1101 01:02:14.951200 1266961 command_runner.go:130] > Change: 2023-11-01 00:32:33.764020799 +0000
	I1101 01:02:14.951207 1266961 command_runner.go:130] >  Birth: 2023-11-01 00:32:33.720021119 +0000
	I1101 01:02:14.951464 1266961 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1101 01:02:14.951476 1266961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1101 01:02:14.977066 1266961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 01:02:15.441327 1266961 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1101 01:02:15.441353 1266961 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1101 01:02:15.441372 1266961 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1101 01:02:15.441379 1266961 command_runner.go:130] > daemonset.apps/kindnet configured
	I1101 01:02:15.441764 1266961 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:02:15.442021 1266961 kapi.go:59] client config for multinode-291182: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 01:02:15.442366 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1101 01:02:15.442383 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:15.442392 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:15.442400 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:15.444876 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:15.444898 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:15.444907 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:15.444914 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:15.444923 1266961 round_trippers.go:580]     Content-Length: 291
	I1101 01:02:15.444930 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:15 GMT
	I1101 01:02:15.444941 1266961 round_trippers.go:580]     Audit-Id: c8a5a5a6-be41-450a-ac2b-082951221857
	I1101 01:02:15.444947 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:15.444954 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:15.445170 1266961 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8e4f95f0-392d-400c-bda6-a37388e1041b","resourceVersion":"445","creationTimestamp":"2023-11-01T01:01:14Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1101 01:02:15.445271 1266961 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-291182" context rescaled to 1 replicas
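	The coredns rescale goes through the Deployment's scale subresource (the GET .../deployments/coredns/scale above), not through a spec patch. A minimal client-go sketch of the same get-then-update sequence, assuming a kubeconfig path like the one in the log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// rescaleCoreDNS adjusts the coredns Deployment through the scale
	// subresource, the same endpoint the round-tripper log above hits.
	func rescaleCoreDNS(kubeconfig string, replicas int32) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		ctx := context.Background()
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = replicas
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}

	func main() {
		if err := rescaleCoreDNS("/home/jenkins/minikube-integration/17486-1197516/kubeconfig", 1); err != nil {
			fmt.Println(err)
		}
	}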
	I1101 01:02:15.445302 1266961 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1101 01:02:15.448102 1266961 out.go:177] * Verifying Kubernetes components...
	I1101 01:02:15.449901 1266961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:02:15.464618 1266961 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:02:15.464876 1266961 kapi.go:59] client config for multinode-291182: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.crt", KeyFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/multinode-291182/client.key", CAFile:"/home/jenkins/minikube-integration/17486-1197516/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16bdf70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 01:02:15.465181 1266961 node_ready.go:35] waiting up to 6m0s for node "multinode-291182-m02" to be "Ready" ...
	I1101 01:02:15.465252 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:15.465264 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:15.465273 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:15.465281 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:15.467936 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:15.467959 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:15.467967 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:15.467974 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:15.467981 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:15.467988 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:15.467998 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:15 GMT
	I1101 01:02:15.468005 1266961 round_trippers.go:580]     Audit-Id: 037fe777-935b-4480-a1e1-d4a72b2fab10
	I1101 01:02:15.468244 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"482","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1101 01:02:15.468689 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:15.468703 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:15.468712 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:15.468725 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:15.470972 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:15.470990 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:15.470998 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:15.471005 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:15 GMT
	I1101 01:02:15.471011 1266961 round_trippers.go:580]     Audit-Id: f03bb320-8f60-4c20-944d-43e7f4563a1f
	I1101 01:02:15.471017 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:15.471024 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:15.471030 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:15.471177 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"482","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1101 01:02:15.972209 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:15.972231 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:15.972241 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:15.972248 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:15.975703 1266961 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 01:02:15.975723 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:15.975732 1266961 round_trippers.go:580]     Audit-Id: 31dbe310-da4a-44bc-8d41-049e416fdb6e
	I1101 01:02:15.975738 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:15.975745 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:15.975751 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:15.975758 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:15.975765 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:15 GMT
	I1101 01:02:15.975914 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"482","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1101 01:02:16.472668 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:16.472694 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:16.472705 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:16.472712 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:16.475161 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:16.475186 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:16.475196 1266961 round_trippers.go:580]     Audit-Id: 2c1a5826-ee31-49ce-9057-65430716cf2a
	I1101 01:02:16.475203 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:16.475210 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:16.475218 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:16.475224 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:16.475235 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:16 GMT
	I1101 01:02:16.475322 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:16.971733 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:16.971759 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:16.971770 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:16.971777 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:16.974347 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:16.974371 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:16.974380 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:16.974387 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:16.974394 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:16.974400 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:16.974407 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:16 GMT
	I1101 01:02:16.974414 1266961 round_trippers.go:580]     Audit-Id: c5a3e9ba-80e8-4973-9f8e-bc4af4cac0a1
	I1101 01:02:16.974620 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:17.471939 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:17.471969 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:17.471979 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:17.471986 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:17.474523 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:17.474543 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:17.474552 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:17.474559 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:17.474566 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:17 GMT
	I1101 01:02:17.474576 1266961 round_trippers.go:580]     Audit-Id: 69e553ec-43e9-4477-bb72-658f7fcee7cd
	I1101 01:02:17.474585 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:17.474607 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:17.474956 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:17.475326 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
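The cycle above repeats roughly every 500ms: the node_ready wait issues a GET for the Node object and inspects its Ready condition until it reports "True". As an illustration only (this is not minikube's actual implementation, and the kubeconfig path is a hypothetical placeholder), an equivalent poll written against client-go could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test drives the cluster through the minikube profile's context.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// The same request the log records: GET /api/v1/nodes/multinode-291182-m02
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-291182-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
				if c.Status == corev1.ConditionTrue {
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
	}
}

Each iteration corresponds to one request/response pair in the log, which is why the Audit-Id header differs on every cycle while the Node's resourceVersion stays at 498 until the object itself changes.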
	I1101 01:02:17.971819 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:17.971842 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:17.971852 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:17.971859 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:17.974370 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:17.974393 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:17.974402 1266961 round_trippers.go:580]     Audit-Id: 0f370407-2adc-4ed6-8b84-1268cc0a8ea3
	I1101 01:02:17.974408 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:17.974414 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:17.974421 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:17.974429 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:17.974438 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:17 GMT
	I1101 01:02:17.974645 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:18.471665 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:18.471685 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:18.471696 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:18.471703 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:18.474170 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:18.474195 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:18.474205 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:18.474211 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:18 GMT
	I1101 01:02:18.474218 1266961 round_trippers.go:580]     Audit-Id: 93a6c207-8dd8-4964-98ca-b5b928add598
	I1101 01:02:18.474225 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:18.474231 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:18.474240 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:18.474402 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:18.972399 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:18.972427 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:18.972437 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:18.972449 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:18.974941 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:18.974967 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:18.974975 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:18.974982 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:18.974989 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:18 GMT
	I1101 01:02:18.974996 1266961 round_trippers.go:580]     Audit-Id: 18752646-7308-42ce-97fd-f67a4bc8e8ca
	I1101 01:02:18.975007 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:18.975014 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:18.975359 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:19.471717 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:19.471742 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:19.471751 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:19.471758 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:19.477680 1266961 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 01:02:19.477702 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:19.477712 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:19 GMT
	I1101 01:02:19.477719 1266961 round_trippers.go:580]     Audit-Id: 382fead0-2536-4734-b07c-530d20760f4b
	I1101 01:02:19.477726 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:19.477732 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:19.477742 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:19.477755 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:19.478012 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:19.478397 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:19.972650 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:19.972678 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:19.972688 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:19.972696 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:19.975117 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:19.975139 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:19.975147 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:19.975154 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:19.975160 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:19.975168 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:19 GMT
	I1101 01:02:19.975178 1266961 round_trippers.go:580]     Audit-Id: 96ad9a9f-f9d1-486e-a00b-9d5c6e448599
	I1101 01:02:19.975185 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:19.975417 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:20.472488 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:20.472512 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:20.472522 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:20.472529 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:20.475000 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:20.475024 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:20.475032 1266961 round_trippers.go:580]     Audit-Id: 415860fc-d7bc-4620-82ed-ff38d9513eb6
	I1101 01:02:20.475039 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:20.475045 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:20.475051 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:20.475059 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:20.475071 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:20 GMT
	I1101 01:02:20.475306 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:20.972198 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:20.972222 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:20.972233 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:20.972241 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:20.974709 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:20.974737 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:20.974746 1266961 round_trippers.go:580]     Audit-Id: 61ecb618-846f-4975-8286-be5be1142dda
	I1101 01:02:20.974752 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:20.974758 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:20.974765 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:20.974771 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:20.974781 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:20 GMT
	I1101 01:02:20.975165 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:21.471771 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:21.471797 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:21.471808 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:21.471815 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:21.474378 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:21.474398 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:21.474408 1266961 round_trippers.go:580]     Audit-Id: 4e912085-982b-47c0-88bd-5e3459cb8f75
	I1101 01:02:21.474414 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:21.474421 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:21.474427 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:21.474434 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:21.474440 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:21 GMT
	I1101 01:02:21.474551 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:21.972362 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:21.972386 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:21.972396 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:21.972403 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:21.975106 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:21.975131 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:21.975141 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:21.975148 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:21.975154 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:21.975160 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:21.975172 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:21 GMT
	I1101 01:02:21.975178 1266961 round_trippers.go:580]     Audit-Id: 52c6b4fb-6d10-4488-a254-d12b04fe8ad9
	I1101 01:02:21.975413 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:21.975776 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:22.472607 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:22.472629 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:22.472640 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:22.472647 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:22.475154 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:22.475178 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:22.475187 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:22.475194 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:22.475200 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:22.475209 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:22 GMT
	I1101 01:02:22.475217 1266961 round_trippers.go:580]     Audit-Id: 44221460-feca-41e1-8146-2cb7ab34ca49
	I1101 01:02:22.475223 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:22.475558 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:22.972280 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:22.972306 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:22.972317 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:22.972324 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:22.974980 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:22.975003 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:22.975015 1266961 round_trippers.go:580]     Audit-Id: c025cbbd-7f2f-422b-b7d9-7507cdc7d1c2
	I1101 01:02:22.975022 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:22.975028 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:22.975034 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:22.975040 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:22.975047 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:22 GMT
	I1101 01:02:22.975390 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:23.472052 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:23.472073 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:23.472084 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:23.472098 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:23.474609 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:23.474704 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:23.474717 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:23.474725 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:23.474731 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:23.474737 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:23.474743 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:23 GMT
	I1101 01:02:23.474750 1266961 round_trippers.go:580]     Audit-Id: 93470d33-2f67-499b-acb1-e5a6c97db6aa
	I1101 01:02:23.474976 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:23.971991 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:23.972015 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:23.972025 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:23.972033 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:23.974372 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:23.974395 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:23.974403 1266961 round_trippers.go:580]     Audit-Id: 8fb97817-89c9-4031-91c6-bdcb8de2afff
	I1101 01:02:23.974409 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:23.974416 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:23.974422 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:23.974429 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:23.974436 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:23 GMT
	I1101 01:02:23.974553 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"498","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1101 01:02:24.476822 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:24.476896 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:24.476925 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:24.476948 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:24.479587 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:24.479612 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:24.479621 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:24 GMT
	I1101 01:02:24.479628 1266961 round_trippers.go:580]     Audit-Id: 0fb3954d-3a30-4220-96c9-9c9fb9b20493
	I1101 01:02:24.479634 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:24.479641 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:24.479648 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:24.479655 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:24.479766 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:24.480143 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:24.971707 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:24.971732 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:24.971743 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:24.971750 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:24.975841 1266961 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1101 01:02:24.975868 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:24.975878 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:24.975885 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:24.975891 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:24.975898 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:24 GMT
	I1101 01:02:24.975905 1266961 round_trippers.go:580]     Audit-Id: e0e89310-d9f4-4165-9ee5-3e60f3d16a72
	I1101 01:02:24.975911 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:24.976422 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:25.472488 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:25.472511 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:25.472524 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:25.472531 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:25.474876 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:25.474906 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:25.474914 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:25.474921 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:25.474927 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:25.474933 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:25 GMT
	I1101 01:02:25.474939 1266961 round_trippers.go:580]     Audit-Id: 2f8b9c61-5345-4fb6-aaf4-d52fc9b8257a
	I1101 01:02:25.474945 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:25.475031 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:25.972659 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:25.972685 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:25.972696 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:25.972712 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:25.975365 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:25.975393 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:25.975402 1266961 round_trippers.go:580]     Audit-Id: ff3baa7b-69fa-4d53-877e-da5885f533c0
	I1101 01:02:25.975410 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:25.975416 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:25.975422 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:25.975429 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:25.975436 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:25 GMT
	I1101 01:02:25.975539 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:26.471692 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:26.471718 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:26.471729 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:26.471737 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:26.474178 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:26.474267 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:26.474291 1266961 round_trippers.go:580]     Audit-Id: fe55131a-ce78-4b5f-a618-ef9534901657
	I1101 01:02:26.474328 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:26.474354 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:26.474379 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:26.474409 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:26.474417 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:26 GMT
	I1101 01:02:26.474532 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:26.971919 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:26.971943 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:26.971954 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:26.971962 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:26.974426 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:26.974446 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:26.974453 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:26.974460 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:26.974466 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:26.974472 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:26.974478 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:26 GMT
	I1101 01:02:26.974486 1266961 round_trippers.go:580]     Audit-Id: 93af12d6-91ed-4eb9-887b-86ca581aaf61
	I1101 01:02:26.974697 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:26.975075 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
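In practice a loop like the one sketched earlier is bounded by an overall timeout rather than left to run forever. A minimal sketch using the k8s.io/apimachinery wait helpers, assuming the 500ms interval seen above and an illustrative (not logged) two-minute timeout:

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady returns once the named Node reports Ready=True, or with an
// error after the timeout. The client would be built as in the earlier sketch;
// the interval and timeout values here are assumptions for illustration.
func waitNodeReady(client kubernetes.Interface, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil // not ready yet; poll again
	})
}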
	I1101 01:02:27.472577 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:27.472599 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:27.472610 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:27.472622 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:27.475130 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:27.475154 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:27.475163 1266961 round_trippers.go:580]     Audit-Id: ec80c2e3-23d5-4a7f-973b-9e5dfb629167
	I1101 01:02:27.475169 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:27.475176 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:27.475183 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:27.475189 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:27.475195 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:27 GMT
	I1101 01:02:27.475294 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:27.972450 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:27.972474 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:27.972490 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:27.972499 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:27.974957 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:27.974982 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:27.974991 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:27.974997 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:27.975003 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:27 GMT
	I1101 01:02:27.975009 1266961 round_trippers.go:580]     Audit-Id: b3d6632d-b08b-4f56-bbc7-e8f2e7a2e230
	I1101 01:02:27.975019 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:27.975026 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:27.975282 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:28.472361 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:28.472408 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:28.472419 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:28.472427 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:28.477572 1266961 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1101 01:02:28.477596 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:28.477605 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:28.477611 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:28.477618 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:28.477624 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:28 GMT
	I1101 01:02:28.477630 1266961 round_trippers.go:580]     Audit-Id: 572ef84c-59f7-4b7e-ae2b-314b887a4d11
	I1101 01:02:28.477636 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:28.478144 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:28.972125 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:28.972148 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:28.972159 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:28.972166 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:28.974686 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:28.974705 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:28.974713 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:28.974720 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:28.974727 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:28 GMT
	I1101 01:02:28.974733 1266961 round_trippers.go:580]     Audit-Id: 8b229008-24d2-4cfb-bf91-7c54416c8606
	I1101 01:02:28.974739 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:28.974745 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:28.974931 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:28.975302 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:29.472630 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:29.472655 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:29.472666 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:29.472673 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:29.475362 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:29.475381 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:29.475389 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:29.475395 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:29.475402 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:29 GMT
	I1101 01:02:29.475409 1266961 round_trippers.go:580]     Audit-Id: 4ba47980-f433-4c4c-9d59-394519b40897
	I1101 01:02:29.475415 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:29.475421 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:29.475567 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:29.972256 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:29.972280 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:29.972292 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:29.972300 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:29.974781 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:29.974856 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:29.974879 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:29.974910 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:29 GMT
	I1101 01:02:29.974957 1266961 round_trippers.go:580]     Audit-Id: 1e986987-3855-46c5-b881-d8e377213151
	I1101 01:02:29.974983 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:29.975024 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:29.975049 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:29.975268 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:30.472627 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:30.472671 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:30.472682 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:30.472689 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:30.475111 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:30.475135 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:30.475146 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:30.475153 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:30.475159 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:30.475166 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:30.475172 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:30 GMT
	I1101 01:02:30.475183 1266961 round_trippers.go:580]     Audit-Id: 9875af5a-4856-4bbb-9a25-2c4d45029bca
	I1101 01:02:30.475291 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:30.972463 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:30.972493 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:30.972503 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:30.972516 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:30.975136 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:30.975156 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:30.975164 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:30.975170 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:30.975176 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:30.975182 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:30.975189 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:30 GMT
	I1101 01:02:30.975195 1266961 round_trippers.go:580]     Audit-Id: 233daab8-b3e3-4c7c-916f-38e71d3c1cf3
	I1101 01:02:30.975334 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:30.975757 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:31.472096 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:31.472119 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:31.472130 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:31.472137 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:31.474643 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:31.474667 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:31.474676 1266961 round_trippers.go:580]     Audit-Id: e2b192dc-2d70-4780-becf-4a6a2c08046e
	I1101 01:02:31.474683 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:31.474689 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:31.474695 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:31.474703 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:31.474709 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:31 GMT
	I1101 01:02:31.474808 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:31.971723 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:31.971748 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:31.971758 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:31.971766 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:31.974299 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:31.974328 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:31.974337 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:31.974343 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:31 GMT
	I1101 01:02:31.974350 1266961 round_trippers.go:580]     Audit-Id: a6d3abc1-a507-4935-940f-77edd45e68e3
	I1101 01:02:31.974355 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:31.974362 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:31.974368 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:31.974487 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:32.472685 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:32.472707 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:32.472718 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:32.472726 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:32.475306 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:32.475327 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:32.475336 1266961 round_trippers.go:580]     Audit-Id: 22e9ef98-901e-4c1c-a82a-5ddd2fc887a5
	I1101 01:02:32.475342 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:32.475349 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:32.475355 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:32.475362 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:32.475369 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:32 GMT
	I1101 01:02:32.476648 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:32.972301 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:32.972323 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:32.972333 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:32.972341 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:32.974843 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:32.974868 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:32.974877 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:32.974883 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:32.974890 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:32.974896 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:32 GMT
	I1101 01:02:32.974902 1266961 round_trippers.go:580]     Audit-Id: dd36fe93-87d6-4922-bd99-575774c02a05
	I1101 01:02:32.974909 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:32.975163 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:33.472236 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:33.472258 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:33.472268 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:33.472276 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:33.474756 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:33.474780 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:33.474788 1266961 round_trippers.go:580]     Audit-Id: d6f513df-5cfd-48c8-8bad-262324b80bd0
	I1101 01:02:33.474795 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:33.474801 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:33.474807 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:33.474813 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:33.474821 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:33 GMT
	I1101 01:02:33.474925 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:33.475286 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:33.971727 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:33.971751 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:33.971763 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:33.971770 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:33.974227 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:33.974246 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:33.974255 1266961 round_trippers.go:580]     Audit-Id: 834d350d-071f-48f8-9a50-100dd6ec1bbf
	I1101 01:02:33.974262 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:33.974268 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:33.974274 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:33.974281 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:33.974288 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:33 GMT
	I1101 01:02:33.974494 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:34.472565 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:34.472590 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:34.472600 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:34.472607 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:34.475464 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:34.475487 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:34.475496 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:34.475503 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:34.475510 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:34.475516 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:34.475522 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:34 GMT
	I1101 01:02:34.475528 1266961 round_trippers.go:580]     Audit-Id: 7aaf5f5c-2a39-4301-8924-43e82297390e
	I1101 01:02:34.475636 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:34.972290 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:34.972316 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:34.972326 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:34.972334 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:34.974760 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:34.974847 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:34.974867 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:34 GMT
	I1101 01:02:34.974875 1266961 round_trippers.go:580]     Audit-Id: ef350c1e-8465-4e4a-a8fe-2f854318f795
	I1101 01:02:34.974899 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:34.974907 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:34.974921 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:34.974928 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:34.975047 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:35.472531 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:35.472555 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:35.472565 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:35.472573 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:35.475743 1266961 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 01:02:35.475805 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:35.475822 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:35.475829 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:35 GMT
	I1101 01:02:35.475840 1266961 round_trippers.go:580]     Audit-Id: dbe23b16-fe43-4c01-9a1b-51d7beda75bf
	I1101 01:02:35.475847 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:35.475853 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:35.475859 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:35.475976 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:35.476368 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:35.972387 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:35.972418 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:35.972428 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:35.972437 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:35.974831 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:35.974857 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:35.974866 1266961 round_trippers.go:580]     Audit-Id: 06e5d098-ff6c-46f3-8124-d18ae3630cdf
	I1101 01:02:35.974873 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:35.974879 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:35.974885 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:35.974893 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:35.974906 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:35 GMT
	I1101 01:02:35.975015 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:36.472116 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:36.472154 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:36.472165 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:36.472173 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:36.474788 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:36.474811 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:36.474820 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:36.474826 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:36.474833 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:36.474839 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:36.474845 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:36 GMT
	I1101 01:02:36.474851 1266961 round_trippers.go:580]     Audit-Id: 3e239725-5cac-4547-af0d-4c674bae5310
	I1101 01:02:36.475042 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:36.971718 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:36.971741 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:36.971751 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:36.971758 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:36.974221 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:36.974246 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:36.974254 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:36.974261 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:36.974268 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:36 GMT
	I1101 01:02:36.974274 1266961 round_trippers.go:580]     Audit-Id: 00f4f7f5-4cb0-48f6-9a1f-598a78d7ddf7
	I1101 01:02:36.974285 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:36.974298 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:36.974425 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:37.472478 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:37.472504 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:37.472514 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:37.472528 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:37.474899 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:37.474923 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:37.474932 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:37.474939 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:37 GMT
	I1101 01:02:37.474945 1266961 round_trippers.go:580]     Audit-Id: c242ffbc-2554-4814-ab3c-57003b9cdb46
	I1101 01:02:37.474951 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:37.474958 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:37.474969 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:37.475076 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:37.972037 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:37.972062 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:37.972073 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:37.972081 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:37.974462 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:37.974482 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:37.974490 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:37.974496 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:37.974502 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:37.974508 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:37.974515 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:37 GMT
	I1101 01:02:37.974521 1266961 round_trippers.go:580]     Audit-Id: 456e2302-86b2-41fb-a1ed-b77818fa2323
	I1101 01:02:37.974636 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:37.974999 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
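
For manual debugging, the Ready condition these polls keep reading can also be pulled with kubectl; the context name below assumes minikube's usual profile-named context, and the jsonpath expression is only one way to select the condition. While the node is in the state logged above, this prints False:

kubectl --context multinode-291182 get node multinode-291182-m02 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
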
	I1101 01:02:38.471713 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:38.471738 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:38.471753 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:38.471760 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:38.474511 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:38.474530 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:38.474538 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:38.474545 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:38.474551 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:38 GMT
	I1101 01:02:38.474557 1266961 round_trippers.go:580]     Audit-Id: 5cf7daf5-7c64-470e-862f-8ad51d370dea
	I1101 01:02:38.474563 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:38.474573 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:38.474668 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:38.971680 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:38.971706 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:38.971716 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:38.971725 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:38.974210 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:38.974236 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:38.974244 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:38.974251 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:38.974257 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:38.974264 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:38 GMT
	I1101 01:02:38.974270 1266961 round_trippers.go:580]     Audit-Id: d0ebc92c-8e7f-489c-96ab-6b7331eab94e
	I1101 01:02:38.974276 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:38.974442 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:39.472550 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:39.472575 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:39.472586 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:39.472593 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:39.475067 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:39.475087 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:39.475096 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:39.475111 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:39.475117 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:39 GMT
	I1101 01:02:39.475124 1266961 round_trippers.go:580]     Audit-Id: b4f4698b-45e4-4139-9fdf-b794aae4369b
	I1101 01:02:39.475130 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:39.475136 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:39.475245 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:39.971921 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:39.971943 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:39.971953 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:39.971961 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:39.974337 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:39.974364 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:39.974373 1266961 round_trippers.go:580]     Audit-Id: d97e17e0-f153-4b81-8737-ff3827b5c04d
	I1101 01:02:39.974381 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:39.974387 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:39.974393 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:39.974399 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:39.974405 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:39 GMT
	I1101 01:02:39.974536 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:40.472322 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:40.472346 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:40.472356 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:40.472364 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:40.474791 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:40.474819 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:40.474827 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:40.474834 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:40.474841 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:40.474847 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:40 GMT
	I1101 01:02:40.474854 1266961 round_trippers.go:580]     Audit-Id: 9220936c-2b8c-4d3a-915a-9a556e95f305
	I1101 01:02:40.474863 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:40.474967 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:40.475332 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:40.971719 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:40.971741 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:40.971751 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:40.971759 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:40.974171 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:40.974195 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:40.974204 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:40.974211 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:40.974217 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:40.974224 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:40 GMT
	I1101 01:02:40.974230 1266961 round_trippers.go:580]     Audit-Id: e010b9ed-57b5-4a27-a063-19952e90ba73
	I1101 01:02:40.974236 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:40.974329 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:41.472455 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:41.472479 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:41.472489 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:41.472507 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:41.475096 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:41.475125 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:41.475134 1266961 round_trippers.go:580]     Audit-Id: ca2e1646-8d8d-46e9-8e3a-31f9b9a11cbe
	I1101 01:02:41.475141 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:41.475148 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:41.475154 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:41.475160 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:41.475167 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:41 GMT
	I1101 01:02:41.475259 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:41.972411 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:41.972433 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:41.972453 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:41.972461 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:41.975110 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:41.975135 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:41.975144 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:41.975151 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:41.975158 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:41 GMT
	I1101 01:02:41.975164 1266961 round_trippers.go:580]     Audit-Id: de28f852-e530-4dee-bdf1-15848013f7fb
	I1101 01:02:41.975170 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:41.975176 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:41.975305 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:42.472492 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:42.472521 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:42.472532 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:42.472539 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:42.475054 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:42.475075 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:42.475083 1266961 round_trippers.go:580]     Audit-Id: 7e51db1e-64d4-49ff-9115-edf3d179b18a
	I1101 01:02:42.475089 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:42.475095 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:42.475101 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:42.475107 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:42.475114 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:42 GMT
	I1101 01:02:42.475213 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:42.475575 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:42.972044 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:42.972068 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:42.972078 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:42.972086 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:42.974546 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:42.974573 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:42.974587 1266961 round_trippers.go:580]     Audit-Id: b44580fc-8d3f-452c-a62a-9c8d77f242c9
	I1101 01:02:42.974595 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:42.974603 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:42.974610 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:42.974616 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:42.974622 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:42 GMT
	I1101 01:02:42.975034 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:43.471649 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:43.471672 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:43.471682 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:43.471690 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:43.474169 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:43.474187 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:43.474196 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:43 GMT
	I1101 01:02:43.474204 1266961 round_trippers.go:580]     Audit-Id: 05a967dd-fb6d-4741-9411-a20f2c88ac8d
	I1101 01:02:43.474210 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:43.474216 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:43.474222 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:43.474228 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:43.474421 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:43.971754 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:43.971778 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:43.971789 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:43.971796 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:43.974275 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:43.974300 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:43.974310 1266961 round_trippers.go:580]     Audit-Id: 091b9e04-a41a-4e91-a435-d10b262fbcae
	I1101 01:02:43.974317 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:43.974323 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:43.974329 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:43.974335 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:43.974346 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:43 GMT
	I1101 01:02:43.974469 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:44.472557 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:44.472582 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:44.472593 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:44.472600 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:44.475388 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:44.475414 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:44.475424 1266961 round_trippers.go:580]     Audit-Id: 7ad4cb29-cccb-4b41-a6fc-6cf033aadaa0
	I1101 01:02:44.475431 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:44.475437 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:44.475443 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:44.475450 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:44.475456 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:44 GMT
	I1101 01:02:44.475542 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:44.475902 1266961 node_ready.go:58] node "multinode-291182-m02" has status "Ready":"False"
	I1101 01:02:44.972638 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:44.972661 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:44.972671 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:44.972679 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:44.975198 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:44.975225 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:44.975234 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:44.975241 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:44.975247 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:44.975254 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:44 GMT
	I1101 01:02:44.975260 1266961 round_trippers.go:580]     Audit-Id: a809c536-266a-4a52-b5a5-80e74c78e0d6
	I1101 01:02:44.975266 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:44.975395 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:45.472082 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:45.472106 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:45.472117 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:45.472126 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:45.474697 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:45.474718 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:45.474727 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:45.474733 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:45.474739 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:45.474745 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:45.474752 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:45 GMT
	I1101 01:02:45.474758 1266961 round_trippers.go:580]     Audit-Id: 18c4695e-627e-41a9-b55c-3ed23f9cffcd
	I1101 01:02:45.474863 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:45.971925 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:45.971957 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:45.971968 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:45.971976 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:45.980267 1266961 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1101 01:02:45.980293 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:45.980302 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:45.980309 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:45.980315 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:45.980326 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:45 GMT
	I1101 01:02:45.980332 1266961 round_trippers.go:580]     Audit-Id: e1204fe0-5734-4abd-8a86-1728993c5bb2
	I1101 01:02:45.980344 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:45.980464 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"506","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1101 01:02:46.472464 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:46.472491 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.472506 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.472515 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.475069 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:46.475091 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.475100 1266961 round_trippers.go:580]     Audit-Id: 7d9539d3-179e-4a4a-9c1f-34b662ee3c19
	I1101 01:02:46.475107 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.475113 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.475119 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.475127 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.475134 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.475210 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"530","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5258 chars]
	I1101 01:02:46.475566 1266961 node_ready.go:49] node "multinode-291182-m02" has status "Ready":"True"
	I1101 01:02:46.475587 1266961 node_ready.go:38] duration metric: took 31.010386914s waiting for node "multinode-291182-m02" to be "Ready" ...
	I1101 01:02:46.475598 1266961 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
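The loop above issues a GET on the node object roughly every 500 ms until its Ready condition flips to True (here after about 31 s). A minimal client-go sketch of that wait pattern, for orientation only (this is not minikube's actual node_ready.go; the kubeconfig path, timeout, and function name are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms until its Ready condition
// is True, mirroring the GET loop in the log above (names assumed).
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollImmediateWithContext(ctx, 500*time.Millisecond, 6*time.Minute,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitNodeReady(context.Background(), cs, "multinode-291182-m02"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}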
	I1101 01:02:46.475661 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1101 01:02:46.475671 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.475678 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.475685 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.479168 1266961 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1101 01:02:46.479187 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.479195 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.479202 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.479212 1266961 round_trippers.go:580]     Audit-Id: 27fc7b0c-abe0-4262-92e6-716360389afe
	I1101 01:02:46.479218 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.479224 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.479230 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.479827 1266961 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"530"},"items":[{"metadata":{"name":"coredns-5dd5756b68-578kc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2f19e5cb-4b75-4e3e-a19b-280990e84437","resourceVersion":"441","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0c6132fd-2767-4767-b0e5-2d46bbd373bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6132fd-2767-4767-b0e5-2d46bbd373bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1101 01:02:46.482719 1266961 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-578kc" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.482815 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-578kc
	I1101 01:02:46.482872 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.482891 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.482900 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.485345 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:46.485369 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.485377 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.485384 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.485390 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.485397 1266961 round_trippers.go:580]     Audit-Id: c178c4a4-f314-4e40-b3de-dceaff2c721e
	I1101 01:02:46.485404 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.485410 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.485578 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-578kc","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"2f19e5cb-4b75-4e3e-a19b-280990e84437","resourceVersion":"441","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0c6132fd-2767-4767-b0e5-2d46bbd373bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c6132fd-2767-4767-b0e5-2d46bbd373bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1101 01:02:46.486079 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:46.486095 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.486103 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.486112 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.488128 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:46.488144 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.488152 1266961 round_trippers.go:580]     Audit-Id: 69dbab7f-a240-4bbd-8714-1b896e93c7f1
	I1101 01:02:46.488158 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.488164 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.488170 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.488179 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.488188 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.488436 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:02:46.488817 1266961 pod_ready.go:92] pod "coredns-5dd5756b68-578kc" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:46.488835 1266961 pod_ready.go:81] duration metric: took 6.087669ms waiting for pod "coredns-5dd5756b68-578kc" in "kube-system" namespace to be "Ready" ...
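Each per-pod wait that follows has the same shape: GET the pod, inspect its PodReady condition, then cross-check the hosting node. A condensed sketch of the readiness check itself (the package and helper name are assumptions, not minikube's pod_ready.go):

package readiness

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True,
// the check behind each `has status "Ready":"True"` line above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}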
	I1101 01:02:46.488847 1266961 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.488902 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-291182
	I1101 01:02:46.488911 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.488919 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.488926 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.491143 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:46.491198 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.491242 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.491269 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.491290 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.491306 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.491313 1266961 round_trippers.go:580]     Audit-Id: 6ee35ec6-b2a5-44d0-9fe6-008dbd3cea05
	I1101 01:02:46.491319 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.491405 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-291182","namespace":"kube-system","uid":"0a33ee34-33c0-4f59-9ae2-8ca35981deae","resourceVersion":"302","creationTimestamp":"2023-11-01T01:01:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"8fc77d08e73561102406304e326b0ada","kubernetes.io/config.mirror":"8fc77d08e73561102406304e326b0ada","kubernetes.io/config.seen":"2023-11-01T01:01:14.618392791Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1101 01:02:46.491828 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:46.491845 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.491854 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.491861 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.493835 1266961 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 01:02:46.493856 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.493863 1266961 round_trippers.go:580]     Audit-Id: 2c38c83a-dc13-49c5-9b27-98a790b77ef7
	I1101 01:02:46.493870 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.493878 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.493884 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.493893 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.493906 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.494009 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:02:46.494362 1266961 pod_ready.go:92] pod "etcd-multinode-291182" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:46.494378 1266961 pod_ready.go:81] duration metric: took 5.521765ms waiting for pod "etcd-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.494394 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.494441 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-291182
	I1101 01:02:46.494451 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.494458 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.494465 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.496464 1266961 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 01:02:46.496540 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.496556 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.496564 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.496570 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.496576 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.496585 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.496591 1266961 round_trippers.go:580]     Audit-Id: 9d08a5f1-9e07-43de-b5f8-486f39674145
	I1101 01:02:46.496910 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-291182","namespace":"kube-system","uid":"da9644de-cf0b-493c-ad01-f81529c891f0","resourceVersion":"308","creationTimestamp":"2023-11-01T01:01:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"6322aef4132b8d2d236e2e4a9c7d6c71","kubernetes.io/config.mirror":"6322aef4132b8d2d236e2e4a9c7d6c71","kubernetes.io/config.seen":"2023-11-01T01:01:14.618398510Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1101 01:02:46.497438 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:46.497454 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.497463 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.497470 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.499463 1266961 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1101 01:02:46.499523 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.499566 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.499594 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.499617 1266961 round_trippers.go:580]     Audit-Id: 0b0d4d78-45dd-4a6e-ad6a-27ef2a817019
	I1101 01:02:46.499642 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.499672 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.499694 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.499837 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:02:46.500197 1266961 pod_ready.go:92] pod "kube-apiserver-multinode-291182" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:46.500214 1266961 pod_ready.go:81] duration metric: took 5.813447ms waiting for pod "kube-apiserver-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.500226 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.500276 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-291182
	I1101 01:02:46.500286 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.500294 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.500301 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.502587 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:46.502639 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.502655 1266961 round_trippers.go:580]     Audit-Id: 6d61ce58-6a44-4849-afa0-c834351547a9
	I1101 01:02:46.502663 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.502669 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.502675 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.502682 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.502690 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.502843 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-291182","namespace":"kube-system","uid":"46a662c3-7497-451d-a776-3070e248ea1f","resourceVersion":"309","creationTimestamp":"2023-11-01T01:01:12Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"815d9bd2feb7a98efe3748f3c66837bf","kubernetes.io/config.mirror":"815d9bd2feb7a98efe3748f3c66837bf","kubernetes.io/config.seen":"2023-11-01T01:01:06.872528900Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1101 01:02:46.503348 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:46.503362 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.503371 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.503379 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.505672 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:46.505730 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.505752 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.505765 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.505775 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.505782 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.505800 1266961 round_trippers.go:580]     Audit-Id: 9f146406-8427-46c4-8c70-08df7ccc58bb
	I1101 01:02:46.505812 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.505927 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:02:46.506374 1266961 pod_ready.go:92] pod "kube-controller-manager-multinode-291182" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:46.506391 1266961 pod_ready.go:81] duration metric: took 6.158027ms waiting for pod "kube-controller-manager-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.506402 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4bhsv" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.672753 1266961 request.go:629] Waited for 166.245374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bhsv
	I1101 01:02:46.672836 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4bhsv
	I1101 01:02:46.672848 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.672858 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.672882 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.675412 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:46.675443 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.675452 1266961 round_trippers.go:580]     Audit-Id: 5857824e-f970-44f5-b211-0b3947d1346f
	I1101 01:02:46.675459 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.675479 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.675492 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.675498 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.675505 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.675694 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4bhsv","generateName":"kube-proxy-","namespace":"kube-system","uid":"cb43c550-6b8d-47a5-9708-d0ca0f1e4d72","resourceVersion":"495","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"783f287e-71d3-45d2-84c3-165b969914ad","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"783f287e-71d3-45d2-84c3-165b969914ad\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
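The request.go:629 lines in this stretch come from client-go's client-side rate limiter (default 5 QPS, burst 10) spacing out the rapid GETs; as the message itself notes, this is separate from server-side API priority and fairness. Where higher limits are wanted, they are set on rest.Config before building the clientset. The values below are illustrative, not minikube's configuration:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; exceeding them produces the
	// "Waited for ... due to client-side throttling" log lines.
	config.QPS = 50    // illustrative: higher steady request rate
	config.Burst = 100 // illustrative: larger short bursts
	_ = kubernetes.NewForConfigOrDie(config)
}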
	I1101 01:02:46.873500 1266961 request.go:629] Waited for 197.316807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:46.873584 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182-m02
	I1101 01:02:46.873594 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:46.873604 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:46.873616 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:46.876083 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:46.876140 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:46.876164 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:46.876189 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:46 GMT
	I1101 01:02:46.876225 1266961 round_trippers.go:580]     Audit-Id: 7f798126-a6b3-4ef4-8952-aba9e9fb7459
	I1101 01:02:46.876238 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:46.876245 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:46.876251 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:46.876371 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182-m02","uid":"32feddc4-3a14-4f29-a857-dc80c4df65f6","resourceVersion":"530","creationTimestamp":"2023-11-01T01:02:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:02:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5258 chars]
	I1101 01:02:46.876747 1266961 pod_ready.go:92] pod "kube-proxy-4bhsv" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:46.876764 1266961 pod_ready.go:81] duration metric: took 370.351277ms waiting for pod "kube-proxy-4bhsv" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:46.876774 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-895f8" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:47.073071 1266961 request.go:629] Waited for 196.232398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-895f8
	I1101 01:02:47.073174 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-895f8
	I1101 01:02:47.073185 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:47.073195 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:47.073202 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:47.075874 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:47.075930 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:47.075961 1266961 round_trippers.go:580]     Audit-Id: 4f0a50ed-52e2-4400-8cb9-5bdd8fb3eb75
	I1101 01:02:47.075974 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:47.075981 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:47.075987 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:47.075993 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:47.075999 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:47 GMT
	I1101 01:02:47.076108 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-895f8","generateName":"kube-proxy-","namespace":"kube-system","uid":"e98c65c1-d3f2-424e-a05f-652d660bff7b","resourceVersion":"412","creationTimestamp":"2023-11-01T01:01:27Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"783f287e-71d3-45d2-84c3-165b969914ad","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"783f287e-71d3-45d2-84c3-165b969914ad\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1101 01:02:47.272918 1266961 request.go:629] Waited for 196.315524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:47.272993 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:47.273004 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:47.273015 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:47.273026 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:47.275396 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:47.275428 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:47.275437 1266961 round_trippers.go:580]     Audit-Id: f88916f5-a9da-4c1d-a3d9-b903fff6446a
	I1101 01:02:47.275489 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:47.275508 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:47.275515 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:47.275521 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:47.275532 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:47 GMT
	I1101 01:02:47.275646 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:02:47.276050 1266961 pod_ready.go:92] pod "kube-proxy-895f8" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:47.276067 1266961 pod_ready.go:81] duration metric: took 399.285205ms waiting for pod "kube-proxy-895f8" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:47.276077 1266961 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:47.473082 1266961 request.go:629] Waited for 196.935393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-291182
	I1101 01:02:47.473188 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-291182
	I1101 01:02:47.473198 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:47.473235 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:47.473249 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:47.475827 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:47.475890 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:47.475914 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:47.475940 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:47 GMT
	I1101 01:02:47.475979 1266961 round_trippers.go:580]     Audit-Id: 726a360e-93b7-4354-a0d9-0e49a2eced1f
	I1101 01:02:47.476006 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:47.476029 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:47.476066 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:47.476229 1266961 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-291182","namespace":"kube-system","uid":"713ae672-bf7e-4ea7-993e-cf425aa2e548","resourceVersion":"304","creationTimestamp":"2023-11-01T01:01:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"92e0b369f3b6f7205d52c0c90e29d288","kubernetes.io/config.mirror":"92e0b369f3b6f7205d52c0c90e29d288","kubernetes.io/config.seen":"2023-11-01T01:01:14.618400766Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-01T01:01:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1101 01:02:47.673025 1266961 request.go:629] Waited for 196.342896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:47.673144 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-291182
	I1101 01:02:47.673161 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:47.673170 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:47.673178 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:47.675862 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:47.675933 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:47.675971 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:47.675984 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:47.675991 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:47 GMT
	I1101 01:02:47.675997 1266961 round_trippers.go:580]     Audit-Id: 4a4bdb9f-6b9c-423e-a31d-7986fcce3bdc
	I1101 01:02:47.676003 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:47.676009 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:47.676133 1266961 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update"
,"apiVersion":"v1","time":"2023-11-01T01:01:11Z","fieldsType":"FieldsV1 [truncated 6036 chars]
	I1101 01:02:47.676582 1266961 pod_ready.go:92] pod "kube-scheduler-multinode-291182" in "kube-system" namespace has status "Ready":"True"
	I1101 01:02:47.676601 1266961 pod_ready.go:81] duration metric: took 400.512938ms waiting for pod "kube-scheduler-multinode-291182" in "kube-system" namespace to be "Ready" ...
	I1101 01:02:47.676613 1266961 pod_ready.go:38] duration metric: took 1.201000696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1101 01:02:47.676651 1266961 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 01:02:47.676718 1266961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:02:47.692131 1266961 system_svc.go:56] duration metric: took 15.466773ms WaitForService to wait for kubelet.
	I1101 01:02:47.692161 1266961 kubeadm.go:581] duration metric: took 32.246830243s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1101 01:02:47.692188 1266961 node_conditions.go:102] verifying NodePressure condition ...
	I1101 01:02:47.872511 1266961 request.go:629] Waited for 180.23117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1101 01:02:47.872568 1266961 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1101 01:02:47.872574 1266961 round_trippers.go:469] Request Headers:
	I1101 01:02:47.872583 1266961 round_trippers.go:473]     Accept: application/json, */*
	I1101 01:02:47.872620 1266961 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1101 01:02:47.875129 1266961 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1101 01:02:47.875203 1266961 round_trippers.go:577] Response Headers:
	I1101 01:02:47.875220 1266961 round_trippers.go:580]     Audit-Id: 51d0c44a-6003-4e14-9966-38efd1939686
	I1101 01:02:47.875227 1266961 round_trippers.go:580]     Cache-Control: no-cache, private
	I1101 01:02:47.875233 1266961 round_trippers.go:580]     Content-Type: application/json
	I1101 01:02:47.875240 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 322b0c66-eacc-440f-b265-b07f548633e1
	I1101 01:02:47.875246 1266961 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 60dd186d-b390-4e45-b588-1dbd6dba0a3f
	I1101 01:02:47.875265 1266961 round_trippers.go:580]     Date: Wed, 01 Nov 2023 01:02:47 GMT
	I1101 01:02:47.875447 1266961 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"multinode-291182","uid":"1121bfdd-a82e-4f29-a8cc-bff7c284065c","resourceVersion":"425","creationTimestamp":"2023-11-01T01:01:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-291182","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b028b5849b88a3a572330fa0732896149c4085a9","minikube.k8s.io/name":"multinode-291182","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_01T01_01_15_0700","minikube.k8s.io/version":"v1.32.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manage
dFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1"," [truncated 12339 chars]
	I1101 01:02:47.876081 1266961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 01:02:47.876101 1266961 node_conditions.go:123] node cpu capacity is 2
	I1101 01:02:47.876111 1266961 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1101 01:02:47.876116 1266961 node_conditions.go:123] node cpu capacity is 2
	I1101 01:02:47.876121 1266961 node_conditions.go:105] duration metric: took 183.927428ms to run NodePressure ...
	I1101 01:02:47.876135 1266961 start.go:228] waiting for startup goroutines ...
	I1101 01:02:47.876163 1266961 start.go:242] writing updated cluster config ...
	I1101 01:02:47.876456 1266961 ssh_runner.go:195] Run: rm -f paused
	I1101 01:02:47.941525 1266961 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1101 01:02:47.943761 1266961 out.go:177] * Done! kubectl is now configured to use "multinode-291182" cluster and "default" namespace by default
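
The pod_ready.go and "client-side throttling" lines above are minikube's readiness loop talking to the apiserver through client-go. A minimal sketch of the same wait pattern, assuming a default kubeconfig; the pod name is taken from the log, while the QPS/Burst values and the 2s poll interval are illustrative, not minikube's actual settings:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True,
// matching the pod_ready.go:92 checks in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// The "Waited ... due to client-side throttling" lines come from the
	// client-go rate limiter; raising QPS/Burst (illustrative values) avoids them.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll up to 6 minutes, the same budget the log shows per pod.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-multinode-291182", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			return podReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}
```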
	
	* 
	* ==> CRI-O <==
	* Nov 01 01:01:58 multinode-291182 crio[896]: time="2023-11-01 01:01:58.572534289Z" level=info msg="Starting container: a03daa9b19c5438646a3bf8141ac01e358ce02cd0572250df87c2062c5333aab" id=f63d886c-c443-46ac-b297-a23f87940db5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 01:01:58 multinode-291182 crio[896]: time="2023-11-01 01:01:58.580308382Z" level=info msg="Created container 1665d62cf3f11e886c4ea23a2bba3f6e66f751f544c964d0f3d3f5dd795db263: kube-system/coredns-5dd5756b68-578kc/coredns" id=2f8b244e-0fd2-41b6-af6a-0580f1561122 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 01:01:58 multinode-291182 crio[896]: time="2023-11-01 01:01:58.581169119Z" level=info msg="Starting container: 1665d62cf3f11e886c4ea23a2bba3f6e66f751f544c964d0f3d3f5dd795db263" id=8e7a5149-89a5-47ea-a0e7-c6fdb4e9f1d2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 01:01:58 multinode-291182 crio[896]: time="2023-11-01 01:01:58.593536777Z" level=info msg="Started container" PID=1923 containerID=a03daa9b19c5438646a3bf8141ac01e358ce02cd0572250df87c2062c5333aab description=kube-system/storage-provisioner/storage-provisioner id=f63d886c-c443-46ac-b297-a23f87940db5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ed481c0adc5bbe9a69aa2550cf4a85f7e6d1f62e126a01b5a174925fa30e554
	Nov 01 01:01:58 multinode-291182 crio[896]: time="2023-11-01 01:01:58.598270617Z" level=info msg="Started container" PID=1931 containerID=1665d62cf3f11e886c4ea23a2bba3f6e66f751f544c964d0f3d3f5dd795db263 description=kube-system/coredns-5dd5756b68-578kc/coredns id=8e7a5149-89a5-47ea-a0e7-c6fdb4e9f1d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1690a644237b22ab2c3550c3bbb55364d2665484386f5193363df80ad3cc4d86
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.193849582Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-2p499/POD" id=fb20c1cb-dfc4-4c87-bee7-957c4009d0e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.193914919Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.210411168Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-2p499 Namespace:default ID:cfe5eaee4b59538370e9a7a63605e6f5493d4638c61320e1b01d5d1e63479eb2 UID:6e0e992e-39e9-46ac-a461-d16ffa8ffbd8 NetNS:/var/run/netns/3116bd52-0dae-4301-a680-c4539e15b494 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.210450552Z" level=info msg="Adding pod default_busybox-5bc68d56bd-2p499 to CNI network \"kindnet\" (type=ptp)"
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.222608536Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-2p499 Namespace:default ID:cfe5eaee4b59538370e9a7a63605e6f5493d4638c61320e1b01d5d1e63479eb2 UID:6e0e992e-39e9-46ac-a461-d16ffa8ffbd8 NetNS:/var/run/netns/3116bd52-0dae-4301-a680-c4539e15b494 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.222754940Z" level=info msg="Checking pod default_busybox-5bc68d56bd-2p499 for CNI network kindnet (type=ptp)"
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.239184637Z" level=info msg="Ran pod sandbox cfe5eaee4b59538370e9a7a63605e6f5493d4638c61320e1b01d5d1e63479eb2 with infra container: default/busybox-5bc68d56bd-2p499/POD" id=fb20c1cb-dfc4-4c87-bee7-957c4009d0e1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.245546454Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1acb9b52-9357-42b3-b543-3fa243f626f4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.245770092Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=1acb9b52-9357-42b3-b543-3fa243f626f4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.246735518Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=dfc24469-fadd-469f-b510-3c3c1dc7b9f8 name=/runtime.v1.ImageService/PullImage
	Nov 01 01:02:49 multinode-291182 crio[896]: time="2023-11-01 01:02:49.247954801Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 01 01:02:50 multinode-291182 crio[896]: time="2023-11-01 01:02:50.222933803Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 01 01:02:51 multinode-291182 crio[896]: time="2023-11-01 01:02:51.477469916Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=dfc24469-fadd-469f-b510-3c3c1dc7b9f8 name=/runtime.v1.ImageService/PullImage
	Nov 01 01:02:51 multinode-291182 crio[896]: time="2023-11-01 01:02:51.482090369Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=471ba7b6-a77b-45ba-b761-02327b04ea6f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 01:02:51 multinode-291182 crio[896]: time="2023-11-01 01:02:51.483684321Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=471ba7b6-a77b-45ba-b761-02327b04ea6f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 01:02:51 multinode-291182 crio[896]: time="2023-11-01 01:02:51.485340401Z" level=info msg="Creating container: default/busybox-5bc68d56bd-2p499/busybox" id=d2f729f2-8961-4deb-a98b-c3d3eee7d3a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 01:02:51 multinode-291182 crio[896]: time="2023-11-01 01:02:51.485485401Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 01 01:02:51 multinode-291182 crio[896]: time="2023-11-01 01:02:51.596548450Z" level=info msg="Created container 75c796b38608ee8b062a3c1053aaca88e5a0da0849922a1c64df6246730ec269: default/busybox-5bc68d56bd-2p499/busybox" id=d2f729f2-8961-4deb-a98b-c3d3eee7d3a0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 01:02:51 multinode-291182 crio[896]: time="2023-11-01 01:02:51.597428067Z" level=info msg="Starting container: 75c796b38608ee8b062a3c1053aaca88e5a0da0849922a1c64df6246730ec269" id=590dabfe-73fc-43b1-b652-a27852ed13da name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 01:02:51 multinode-291182 crio[896]: time="2023-11-01 01:02:51.612597981Z" level=info msg="Started container" PID=2078 containerID=75c796b38608ee8b062a3c1053aaca88e5a0da0849922a1c64df6246730ec269 description=default/busybox-5bc68d56bd-2p499/busybox id=590dabfe-73fc-43b1-b652-a27852ed13da name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfe5eaee4b59538370e9a7a63605e6f5493d4638c61320e1b01d5d1e63479eb2
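
The ImageStatus/PullImage pairs above are CRI calls that the kubelet makes against the CRI-O socket. A minimal sketch of the same ImageStatus query over gRPC, using the socket path from the node annotations; the client wiring here is illustrative, not the kubelet's:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Socket path taken from the kubeadm.alpha.kubernetes.io/cri-socket
	// annotation shown in the node output below.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"},
	})
	if err != nil {
		panic(err)
	}
	if resp.Image == nil {
		// This is the "Image ... not found" case in the log,
		// after which a PullImage call follows.
		fmt.Println("image not present")
		return
	}
	fmt.Println("image id:", resp.Image.Id)
}
```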
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	75c796b38608e       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   4 seconds ago        Running             busybox                   0                   cfe5eaee4b595       busybox-5bc68d56bd-2p499
	1665d62cf3f11       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      58 seconds ago       Running             coredns                   0                   1690a644237b2       coredns-5dd5756b68-578kc
	a03daa9b19c54       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      58 seconds ago       Running             storage-provisioner       0                   7ed481c0adc5b       storage-provisioner
	b4ba671a92e13       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                      About a minute ago   Running             kube-proxy                0                   b3ac83b5e8392       kube-proxy-895f8
	23237fec6da34       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   5d08e02bb1202       kindnet-rlzpj
	4673cd905a4b9       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                      About a minute ago   Running             kube-controller-manager   0                   c2bed50163cf1       kube-controller-manager-multinode-291182
	6179ef1243c86       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                      About a minute ago   Running             kube-apiserver            0                   386487cfbae78       kube-apiserver-multinode-291182
	7d28251cb9795       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                      About a minute ago   Running             kube-scheduler            0                   6d09c1f6634a7       kube-scheduler-multinode-291182
	51ebc5f9ce2c4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   aad2a26049dfd       etcd-multinode-291182
	
	* 
	* ==> coredns [1665d62cf3f11e886c4ea23a2bba3f6e66f751f544c964d0f3d3f5dd795db263] <==
	* [INFO] 10.244.0.3:45325 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146346s
	[INFO] 10.244.1.2:40612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129517s
	[INFO] 10.244.1.2:53192 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003476181s
	[INFO] 10.244.1.2:49963 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090396s
	[INFO] 10.244.1.2:45219 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060956s
	[INFO] 10.244.1.2:40431 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001484726s
	[INFO] 10.244.1.2:50985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064664s
	[INFO] 10.244.1.2:47864 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055901s
	[INFO] 10.244.1.2:53805 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057673s
	[INFO] 10.244.0.3:57078 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119113s
	[INFO] 10.244.0.3:47667 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074396s
	[INFO] 10.244.0.3:60048 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067725s
	[INFO] 10.244.0.3:51042 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084972s
	[INFO] 10.244.1.2:39804 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010249s
	[INFO] 10.244.1.2:46572 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087359s
	[INFO] 10.244.1.2:44366 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076669s
	[INFO] 10.244.1.2:55046 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079433s
	[INFO] 10.244.0.3:34806 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138748s
	[INFO] 10.244.0.3:49981 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166752s
	[INFO] 10.244.0.3:41092 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107774s
	[INFO] 10.244.0.3:42690 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000157193s
	[INFO] 10.244.1.2:50438 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102063s
	[INFO] 10.244.1.2:59103 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000067881s
	[INFO] 10.244.1.2:53853 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000073936s
	[INFO] 10.244.1.2:35066 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008192s
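
The NXDOMAIN/NOERROR sequence for kubernetes.default reflects the pod resolv.conf search-path expansion (ndots): unqualified names are tried against each search domain until kubernetes.default.svc.cluster.local resolves. A short sketch that resolves the rooted FQDN directly and so skips that expansion, assuming it runs inside a pod on this cluster:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The trailing dot marks the name as fully qualified, avoiding the
	// search-path attempts that produced the NXDOMAIN lines above.
	ips, err := net.DefaultResolver.LookupIPAddr(ctx, "kubernetes.default.svc.cluster.local.")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip.String()) // expect 10.96.0.1, the kubernetes service ClusterIP
	}
}
```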
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-291182
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-291182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b028b5849b88a3a572330fa0732896149c4085a9
	                    minikube.k8s.io/name=multinode-291182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_01T01_01_15_0700
	                    minikube.k8s.io/version=v1.32.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:01:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-291182
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 01:02:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:01:58 +0000   Wed, 01 Nov 2023 01:01:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:01:58 +0000   Wed, 01 Nov 2023 01:01:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:01:58 +0000   Wed, 01 Nov 2023 01:01:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:01:58 +0000   Wed, 01 Nov 2023 01:01:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-291182
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 cfaf6e756d2d4acbb092ab545975e8e2
	  System UUID:                42a77b17-878d-46c2-a1a9-f736102c563d
	  Boot ID:                    11045d5e-2454-4ceb-8984-3078b90f4cad
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-2p499                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-578kc                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     89s
	  kube-system                 etcd-multinode-291182                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         102s
	  kube-system                 kindnet-rlzpj                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-multinode-291182             250m (12%)    0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-controller-manager-multinode-291182    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-895f8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-multinode-291182             100m (5%)     0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s (x8 over 110s)  kubelet          Node multinode-291182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x8 over 110s)  kubelet          Node multinode-291182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x8 over 110s)  kubelet          Node multinode-291182 status is now: NodeHasSufficientPID
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s                 kubelet          Node multinode-291182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s                 kubelet          Node multinode-291182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node multinode-291182 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s                  node-controller  Node multinode-291182 event: Registered Node multinode-291182 in Controller
	  Normal  NodeReady                58s                  kubelet          Node multinode-291182 status is now: NodeReady
	
	
	Name:               multinode-291182-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-291182-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Nov 2023 01:02:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-291182-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Nov 2023 01:02:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Nov 2023 01:02:45 +0000   Wed, 01 Nov 2023 01:02:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Nov 2023 01:02:45 +0000   Wed, 01 Nov 2023 01:02:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Nov 2023 01:02:45 +0000   Wed, 01 Nov 2023 01:02:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Nov 2023 01:02:45 +0000   Wed, 01 Nov 2023 01:02:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-291182-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 271943a8d0a34d58a8571dffbb1491b4
	  System UUID:                c2824abb-e9fd-4c38-93e3-6aa7085707f9
	  Boot ID:                    11045d5e-2454-4ceb-8984-3078b90f4cad
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-7m7pb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-vbk95               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-proxy-4bhsv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  42s (x5 over 44s)  kubelet          Node multinode-291182-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x5 over 44s)  kubelet          Node multinode-291182-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x5 over 44s)  kubelet          Node multinode-291182-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node multinode-291182-m02 event: Registered Node multinode-291182-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-291182-m02 status is now: NodeReady
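
The percentages in the two Allocated resources tables are requests divided by allocatable capacity, truncated: 850m of 2 CPUs is 42%. A worked check using the apimachinery resource package, with values taken from the control-plane node above:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Values from the control-plane node: 850m CPU requested, 2 CPUs allocatable.
	req := resource.MustParse("850m")
	capacity := resource.MustParse("2")

	pct := float64(req.MilliValue()) / float64(capacity.MilliValue()) * 100
	// Prints 42.5%; kubectl truncates this to the 42% shown in the table.
	fmt.Printf("cpu requests: %s of %s = %.1f%%\n", req.String(), capacity.String(), pct)
}
```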
	
	* 
	* ==> dmesg <==
	* [  +0.001083] FS-Cache: O-key=[8] '70643b0000000000'
	[  +0.000767] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001031] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000004e2890c8
	[  +0.001063] FS-Cache: N-key=[8] '70643b0000000000'
	[  +0.004430] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000527cc4c3
	[  +0.001080] FS-Cache: O-key=[8] '70643b0000000000'
	[  +0.000717] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000008a5a3042
	[  +0.001070] FS-Cache: N-key=[8] '70643b0000000000'
	[  +2.029136] FS-Cache: Duplicate cookie detected
	[  +0.000790] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001008] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=00000000d9fe484b
	[  +0.001140] FS-Cache: O-key=[8] '6f643b0000000000'
	[  +0.000721] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=000000004e2890c8
	[  +0.001074] FS-Cache: N-key=[8] '6f643b0000000000'
	[  +0.310063] FS-Cache: Duplicate cookie detected
	[  +0.000725] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001019] FS-Cache: O-cookie d=000000004aa3546a{9p.inode} n=000000005bafb08b
	[  +0.001102] FS-Cache: O-key=[8] '75643b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=000000004aa3546a{9p.inode} n=00000000763bdf7d
	[  +0.001071] FS-Cache: N-key=[8] '75643b0000000000'
	
	* 
	* ==> etcd [51ebc5f9ce2c4f452281386d3a089444d77ca0acd5ceaf4976ee845388deb7e2] <==
	* {"level":"info","ts":"2023-11-01T01:01:07.697942Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-01T01:01:07.699002Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b2c6679ac05f2cf1","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-11-01T01:01:07.699964Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T01:01:07.700198Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T01:01:07.700277Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-01T01:01:07.700462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-11-01T01:01:07.700564Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-11-01T01:01:08.676798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-01T01:01:08.676848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-01T01:01:08.676874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-01T01:01:08.676887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-01T01:01:08.676894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-01T01:01:08.676906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-01T01:01:08.676913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-01T01:01:08.678019Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-291182 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-01T01:01:08.678081Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:01:08.678164Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-01T01:01:08.679187Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-01T01:01:08.679351Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:01:08.68028Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:01:08.680427Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:01:08.689357Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-01T01:01:08.680601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-01T01:01:08.683525Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-01T01:01:08.703207Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  01:02:56 up  8:45,  0 users,  load average: 1.09, 1.58, 1.44
	Linux multinode-291182 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [23237fec6da342c99708d227656c7130fa013298d5e11fe073c39b2ae69cd2a0] <==
	* I1101 01:01:27.618854       1 main.go:116] setting mtu 1500 for CNI 
	I1101 01:01:27.618873       1 main.go:146] kindnetd IP family: "ipv4"
	I1101 01:01:27.618884       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1101 01:01:57.865472       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1101 01:01:57.879069       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1101 01:01:57.879100       1 main.go:227] handling current node
	I1101 01:02:07.896434       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1101 01:02:07.896467       1 main.go:227] handling current node
	I1101 01:02:17.900420       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1101 01:02:17.900451       1 main.go:227] handling current node
	I1101 01:02:17.900461       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1101 01:02:17.900467       1 main.go:250] Node multinode-291182-m02 has CIDR [10.244.1.0/24] 
	I1101 01:02:17.900630       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1101 01:02:27.912947       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1101 01:02:27.912977       1 main.go:227] handling current node
	I1101 01:02:27.913059       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1101 01:02:27.913069       1 main.go:250] Node multinode-291182-m02 has CIDR [10.244.1.0/24] 
	I1101 01:02:37.923471       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1101 01:02:37.923498       1 main.go:227] handling current node
	I1101 01:02:37.923508       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1101 01:02:37.923514       1 main.go:250] Node multinode-291182-m02 has CIDR [10.244.1.0/24] 
	I1101 01:02:47.937444       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1101 01:02:47.937477       1 main.go:227] handling current node
	I1101 01:02:47.937488       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1101 01:02:47.937494       1 main.go:250] Node multinode-291182-m02 has CIDR [10.244.1.0/24] 
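
The routes.go:62 line shows kindnet installing a route to the second node's pod CIDR (10.244.1.0/24) via that node's IP (192.168.58.3). A sketch of the equivalent route add using the vishvananda/netlink package, which is an assumption here rather than necessarily kindnet's actual dependency; it needs CAP_NET_ADMIN:

```go
package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Mirror the route from the log: 10.244.1.0/24 via 192.168.58.3.
	_, dst, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		panic(err)
	}
	route := &netlink.Route{
		Dst: dst,
		Gw:  net.ParseIP("192.168.58.3"),
	}
	// Requires CAP_NET_ADMIN; kindnet runs as a privileged DaemonSet.
	if err := netlink.RouteAdd(route); err != nil {
		panic(err)
	}
}
```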
	
	* 
	* ==> kube-apiserver [6179ef1243c86f08394db15efe63b5ef66d3bf8a51e0edc60ff26943df68739d] <==
	* I1101 01:01:11.213244       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 01:01:11.290906       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 01:01:11.304790       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 01:01:11.305526       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 01:01:11.306196       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 01:01:11.306225       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 01:01:11.307035       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 01:01:11.331903       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 01:01:12.109789       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 01:01:12.116873       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 01:01:12.116896       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 01:01:12.610574       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 01:01:12.649697       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 01:01:12.717627       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 01:01:12.724391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1101 01:01:12.725412       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 01:01:12.729145       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 01:01:13.176428       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 01:01:14.554593       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 01:01:14.573245       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 01:01:14.588051       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 01:01:26.328257       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 01:01:26.961951       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1101 01:02:51.970632       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400c992690), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400c8beff0), ResponseWriter:(*httpsnoop.rw)(0x400c8beff0), Flusher:(*httpsnoop.rw)(0x400c8beff0), CloseNotifier:(*httpsnoop.rw)(0x400c8beff0), Pusher:(*httpsnoop.rw)(0x400c8beff0)}}, encoder:(*versioning.codec)(0x400c48fc20), memAllocator:(*runtime.Allocator)(0x400c452978)})
	E1101 01:02:52.580490       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:56138: write: broken pipe
	
	* 
	* ==> kube-controller-manager [4673cd905a4b9fdafda174c53110205017126bf28f73d938077db1d07f1fc5b6] <==
	* I1101 01:01:58.098288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="227.223µs"
	I1101 01:01:58.112917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.957µs"
	I1101 01:01:58.883124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.555199ms"
	I1101 01:01:58.883272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.695µs"
	I1101 01:02:01.155523       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1101 01:02:14.239572       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-291182-m02\" does not exist"
	I1101 01:02:14.272561       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-291182-m02" podCIDRs=["10.244.1.0/24"]
	I1101 01:02:14.275437       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4bhsv"
	I1101 01:02:14.275473       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vbk95"
	I1101 01:02:16.157530       1 event.go:307] "Event occurred" object="multinode-291182-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-291182-m02 event: Registered Node multinode-291182-m02 in Controller"
	I1101 01:02:16.157684       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-291182-m02"
	I1101 01:02:45.983476       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-291182-m02"
	I1101 01:02:48.814967       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1101 01:02:48.828147       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-7m7pb"
	I1101 01:02:48.845640       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-2p499"
	I1101 01:02:48.871657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="56.526136ms"
	I1101 01:02:48.886296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.563747ms"
	I1101 01:02:48.887309       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.934µs"
	I1101 01:02:48.896774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.6µs"
	I1101 01:02:48.905070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.821µs"
	I1101 01:02:48.919207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.421µs"
	I1101 01:02:51.910596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.559932ms"
	I1101 01:02:51.910797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="63.327µs"
	I1101 01:02:51.963619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.545338ms"
	I1101 01:02:51.963690       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.97µs"
	
	* 
	* ==> kube-proxy [b4ba671a92e1380608915bb26eedf080c7604ec5f430fad53202502af9eccb03] <==
	* I1101 01:01:27.618388       1 server_others.go:69] "Using iptables proxy"
	I1101 01:01:27.634045       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1101 01:01:27.658371       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 01:01:27.660446       1 server_others.go:152] "Using iptables Proxier"
	I1101 01:01:27.660480       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 01:01:27.660487       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 01:01:27.660560       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 01:01:27.660804       1 server.go:846] "Version info" version="v1.28.3"
	I1101 01:01:27.660819       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 01:01:27.661798       1 config.go:188] "Starting service config controller"
	I1101 01:01:27.661928       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 01:01:27.661984       1 config.go:97] "Starting endpoint slice config controller"
	I1101 01:01:27.662027       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 01:01:27.662575       1 config.go:315] "Starting node config controller"
	I1101 01:01:27.664259       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 01:01:27.762410       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 01:01:27.762429       1 shared_informer.go:318] Caches are synced for service config
	I1101 01:01:27.764526       1 shared_informer.go:318] Caches are synced for node config
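
The route_localnet=1 line from kube-proxy refers to the net.ipv4.conf.all.route_localnet sysctl, which is what allows NodePorts to answer on 127.0.0.1. A minimal sketch of flipping it via procfs, assuming root on the node (kube-proxy uses its own sysctl wrapper, so this is illustrative only):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent of `sysctl -w net.ipv4.conf.all.route_localnet=1`;
	// requires root, which kube-proxy has on the node.
	path := "/proc/sys/net/ipv4/conf/all/route_localnet"
	if err := os.WriteFile(path, []byte("1"), 0o644); err != nil {
		panic(err)
	}
	b, _ := os.ReadFile(path)
	fmt.Printf("route_localnet = %s", b)
}
```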
	
	* 
	* ==> kube-scheduler [7d28251cb97953c896da735c9ba0a890af77d8bfc2e6516e3af2997dc93d27e6] <==
	* W1101 01:01:11.272473       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 01:01:11.273065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 01:01:11.277453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 01:01:11.277540       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 01:01:11.280708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1101 01:01:11.280817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1101 01:01:11.281855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:01:11.281884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 01:01:11.281942       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 01:01:11.281958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 01:01:11.281948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1101 01:01:11.282056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1101 01:01:11.282012       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 01:01:11.282150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 01:01:12.136655       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 01:01:12.136789       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 01:01:12.152158       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 01:01:12.152197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 01:01:12.211136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 01:01:12.211174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 01:01:12.277214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 01:01:12.277317       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 01:01:12.366343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 01:01:12.366444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1101 01:01:14.460735       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.103957    1388 topology_manager.go:215] "Topology Admit Handler" podUID="66913683-459b-404f-b453-48bccb6ebbdb" podNamespace="kube-system" podName="kindnet-rlzpj"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.220550    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66913683-459b-404f-b453-48bccb6ebbdb-xtables-lock\") pod \"kindnet-rlzpj\" (UID: \"66913683-459b-404f-b453-48bccb6ebbdb\") " pod="kube-system/kindnet-rlzpj"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.220609    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpdp\" (UniqueName: \"kubernetes.io/projected/66913683-459b-404f-b453-48bccb6ebbdb-kube-api-access-6jpdp\") pod \"kindnet-rlzpj\" (UID: \"66913683-459b-404f-b453-48bccb6ebbdb\") " pod="kube-system/kindnet-rlzpj"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.220638    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e98c65c1-d3f2-424e-a05f-652d660bff7b-lib-modules\") pod \"kube-proxy-895f8\" (UID: \"e98c65c1-d3f2-424e-a05f-652d660bff7b\") " pod="kube-system/kube-proxy-895f8"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.220662    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdkcl\" (UniqueName: \"kubernetes.io/projected/e98c65c1-d3f2-424e-a05f-652d660bff7b-kube-api-access-zdkcl\") pod \"kube-proxy-895f8\" (UID: \"e98c65c1-d3f2-424e-a05f-652d660bff7b\") " pod="kube-system/kube-proxy-895f8"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.220684    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66913683-459b-404f-b453-48bccb6ebbdb-lib-modules\") pod \"kindnet-rlzpj\" (UID: \"66913683-459b-404f-b453-48bccb6ebbdb\") " pod="kube-system/kindnet-rlzpj"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.220711    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e98c65c1-d3f2-424e-a05f-652d660bff7b-kube-proxy\") pod \"kube-proxy-895f8\" (UID: \"e98c65c1-d3f2-424e-a05f-652d660bff7b\") " pod="kube-system/kube-proxy-895f8"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.220740    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/66913683-459b-404f-b453-48bccb6ebbdb-cni-cfg\") pod \"kindnet-rlzpj\" (UID: \"66913683-459b-404f-b453-48bccb6ebbdb\") " pod="kube-system/kindnet-rlzpj"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.220763    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e98c65c1-d3f2-424e-a05f-652d660bff7b-xtables-lock\") pod \"kube-proxy-895f8\" (UID: \"e98c65c1-d3f2-424e-a05f-652d660bff7b\") " pod="kube-system/kube-proxy-895f8"
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: W1101 01:01:27.433934    1388 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/crio-b3ac83b5e839271914a9ae167fccd6bf2ab548804f7b63c8032dd770f0050706 WatchSource:0}: Error finding container b3ac83b5e839271914a9ae167fccd6bf2ab548804f7b63c8032dd770f0050706: Status 404 returned error can't find the container with id b3ac83b5e839271914a9ae167fccd6bf2ab548804f7b63c8032dd770f0050706
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: W1101 01:01:27.434703    1388 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/crio-5d08e02bb12029e8efc4e51c120442cc9ce50e029cf52cc65e9ee4eb9d86981d WatchSource:0}: Error finding container 5d08e02bb12029e8efc4e51c120442cc9ce50e029cf52cc65e9ee4eb9d86981d: Status 404 returned error can't find the container with id 5d08e02bb12029e8efc4e51c120442cc9ce50e029cf52cc65e9ee4eb9d86981d
	Nov 01 01:01:27 multinode-291182 kubelet[1388]: I1101 01:01:27.828171    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-rlzpj" podStartSLOduration=0.828124373 podCreationTimestamp="2023-11-01 01:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-01 01:01:27.811839125 +0000 UTC m=+13.295312785" watchObservedRunningTime="2023-11-01 01:01:27.828124373 +0000 UTC m=+13.311598009"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.068931    1388 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.094122    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-895f8" podStartSLOduration=31.094079455 podCreationTimestamp="2023-11-01 01:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-01 01:01:27.832237778 +0000 UTC m=+13.315711422" watchObservedRunningTime="2023-11-01 01:01:58.094079455 +0000 UTC m=+43.577553083"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.094340    1388 topology_manager.go:215] "Topology Admit Handler" podUID="2f19e5cb-4b75-4e3e-a19b-280990e84437" podNamespace="kube-system" podName="coredns-5dd5756b68-578kc"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.099645    1388 topology_manager.go:215] "Topology Admit Handler" podUID="194ac2e0-8f59-49fb-9ede-086271776161" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.244474    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/194ac2e0-8f59-49fb-9ede-086271776161-tmp\") pod \"storage-provisioner\" (UID: \"194ac2e0-8f59-49fb-9ede-086271776161\") " pod="kube-system/storage-provisioner"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.244534    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f19e5cb-4b75-4e3e-a19b-280990e84437-config-volume\") pod \"coredns-5dd5756b68-578kc\" (UID: \"2f19e5cb-4b75-4e3e-a19b-280990e84437\") " pod="kube-system/coredns-5dd5756b68-578kc"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.244564    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b6gl\" (UniqueName: \"kubernetes.io/projected/194ac2e0-8f59-49fb-9ede-086271776161-kube-api-access-2b6gl\") pod \"storage-provisioner\" (UID: \"194ac2e0-8f59-49fb-9ede-086271776161\") " pod="kube-system/storage-provisioner"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.244594    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnnnk\" (UniqueName: \"kubernetes.io/projected/2f19e5cb-4b75-4e3e-a19b-280990e84437-kube-api-access-gnnnk\") pod \"coredns-5dd5756b68-578kc\" (UID: \"2f19e5cb-4b75-4e3e-a19b-280990e84437\") " pod="kube-system/coredns-5dd5756b68-578kc"
	Nov 01 01:01:58 multinode-291182 kubelet[1388]: I1101 01:01:58.856808    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.85676454 podCreationTimestamp="2023-11-01 01:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-01 01:01:58.856399578 +0000 UTC m=+44.339873214" watchObservedRunningTime="2023-11-01 01:01:58.85676454 +0000 UTC m=+44.340238176"
	Nov 01 01:02:48 multinode-291182 kubelet[1388]: I1101 01:02:48.892140    1388 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-578kc" podStartSLOduration=81.892098374 podCreationTimestamp="2023-11-01 01:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-01 01:01:58.871280789 +0000 UTC m=+44.354754417" watchObservedRunningTime="2023-11-01 01:02:48.892098374 +0000 UTC m=+94.375572002"
	Nov 01 01:02:48 multinode-291182 kubelet[1388]: I1101 01:02:48.892467    1388 topology_manager.go:215] "Topology Admit Handler" podUID="6e0e992e-39e9-46ac-a461-d16ffa8ffbd8" podNamespace="default" podName="busybox-5bc68d56bd-2p499"
	Nov 01 01:02:49 multinode-291182 kubelet[1388]: I1101 01:02:49.039509    1388 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6brlf\" (UniqueName: \"kubernetes.io/projected/6e0e992e-39e9-46ac-a461-d16ffa8ffbd8-kube-api-access-6brlf\") pod \"busybox-5bc68d56bd-2p499\" (UID: \"6e0e992e-39e9-46ac-a461-d16ffa8ffbd8\") " pod="default/busybox-5bc68d56bd-2p499"
	Nov 01 01:02:49 multinode-291182 kubelet[1388]: W1101 01:02:49.237888    1388 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/crio-cfe5eaee4b59538370e9a7a63605e6f5493d4638c61320e1b01d5d1e63479eb2 WatchSource:0}: Error finding container cfe5eaee4b59538370e9a7a63605e6f5493d4638c61320e1b01d5d1e63479eb2: Status 404 returned error can't find the container with id cfe5eaee4b59538370e9a7a63605e6f5493d4638c61320e1b01d5d1e63479eb2
	

                                                
                                                
-- /stdout --
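Note on the kube-scheduler log above: the burst of "forbidden" list/watch warnings at 01:01:11–01:01:12 is ordinary control-plane bootstrap noise — the scheduler begins listing resources before the apiserver is serving its RBAC bindings — and the later "Caches are synced" lines show it recovered, so those warnings are likely unrelated to the ping failure. As a hedged sketch (the context name is taken from this run; neither command was executed here), the default scheduler bindings can be spot-checked with:

	kubectl --context multinode-291182 get clusterrolebinding system:kube-scheduler
	kubectl --context multinode-291182 auth can-i list nodes --as=system:kube-scheduler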
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-291182 -n multinode-291182
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-291182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.31s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (72.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1219184108.exe start -p running-upgrade-788193 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1101 01:17:55.882174 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1219184108.exe start -p running-upgrade-788193 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m3.003753153s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-788193 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-788193 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (4.240175543s)

                                                
                                                
-- stdout --
	* [running-upgrade-788193] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-788193 in cluster running-upgrade-788193
	* Pulling base image ...
	* Updating the running docker "running-upgrade-788193" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 01:18:52.865845 1326758 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:18:52.866091 1326758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:18:52.866119 1326758 out.go:309] Setting ErrFile to fd 2...
	I1101 01:18:52.866162 1326758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:18:52.866448 1326758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 01:18:52.867017 1326758 out.go:303] Setting JSON to false
	I1101 01:18:52.868147 1326758 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32480,"bootTime":1698769053,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 01:18:52.868244 1326758 start.go:138] virtualization:  
	I1101 01:18:52.872462 1326758 out.go:177] * [running-upgrade-788193] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 01:18:52.874968 1326758 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 01:18:52.876873 1326758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 01:18:52.875119 1326758 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1101 01:18:52.875157 1326758 notify.go:220] Checking for updates...
	I1101 01:18:52.880108 1326758 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:18:52.881846 1326758 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 01:18:52.883608 1326758 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 01:18:52.885718 1326758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 01:18:52.888222 1326758 config.go:182] Loaded profile config "running-upgrade-788193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1101 01:18:52.890563 1326758 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1101 01:18:52.892424 1326758 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 01:18:52.928923 1326758 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 01:18:52.929051 1326758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:18:53.038414 1326758 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-01 01:18:53.027514892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:18:53.038516 1326758 docker.go:295] overlay module found
	I1101 01:18:53.040650 1326758 out.go:177] * Using the docker driver based on existing profile
	I1101 01:18:53.042452 1326758 start.go:298] selected driver: docker
	I1101 01:18:53.042477 1326758 start.go:902] validating driver "docker" against &{Name:running-upgrade-788193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-788193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.226 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 01:18:53.042580 1326758 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 01:18:53.043514 1326758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:18:53.067230 1326758 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1101 01:18:53.123891 1326758 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-01 01:18:53.113821365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:18:53.124294 1326758 cni.go:84] Creating CNI manager for ""
	I1101 01:18:53.124314 1326758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 01:18:53.124326 1326758 start_flags.go:323] config:
	{Name:running-upgrade-788193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-788193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.226 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 01:18:53.127552 1326758 out.go:177] * Starting control plane node running-upgrade-788193 in cluster running-upgrade-788193
	I1101 01:18:53.129445 1326758 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 01:18:53.131439 1326758 out.go:177] * Pulling base image ...
	I1101 01:18:53.133142 1326758 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1101 01:18:53.133230 1326758 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1101 01:18:53.150885 1326758 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1101 01:18:53.150913 1326758 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1101 01:18:53.204263 1326758 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1101 01:18:53.204422 1326758 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/running-upgrade-788193/config.json ...
	I1101 01:18:53.204542 1326758 cache.go:107] acquiring lock: {Name:mka89eb28dc72e1a46e6c55775643518cc76d2e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204625 1326758 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 01:18:53.204633 1326758 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.674µs
	I1101 01:18:53.204642 1326758 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 01:18:53.204652 1326758 cache.go:107] acquiring lock: {Name:mkcd0eb14775904e216368f5cf607d17446ff03c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204682 1326758 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1101 01:18:53.204682 1326758 cache.go:194] Successfully downloaded all kic artifacts
	I1101 01:18:53.204687 1326758 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 36.496µs
	I1101 01:18:53.204700 1326758 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1101 01:18:53.204703 1326758 start.go:365] acquiring machines lock for running-upgrade-788193: {Name:mkca78a542979d0e422ef2a320afaae599c8a8fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204709 1326758 cache.go:107] acquiring lock: {Name:mkce4c558234459005acad2f6e3084db5d193195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204739 1326758 start.go:369] acquired machines lock for "running-upgrade-788193" in 24.763µs
	I1101 01:18:53.204743 1326758 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1101 01:18:53.204752 1326758 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:18:53.204748 1326758 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 40.951µs
	I1101 01:18:53.204759 1326758 fix.go:54] fixHost starting: 
	I1101 01:18:53.204768 1326758 cache.go:107] acquiring lock: {Name:mk5c30858431dfdaab3ee3ccef6e6e01a4bd052f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204797 1326758 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1101 01:18:53.204802 1326758 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 35.905µs
	I1101 01:18:53.204808 1326758 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1101 01:18:53.204817 1326758 cache.go:107] acquiring lock: {Name:mk271c5f9ea5221d5c3ba6bd7ef149d160e54b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204841 1326758 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1101 01:18:53.204846 1326758 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 29.883µs
	I1101 01:18:53.204853 1326758 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1101 01:18:53.204861 1326758 cache.go:107] acquiring lock: {Name:mk25f668b9d26dbd5166e63a4b6fd4ebaa89c209 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204885 1326758 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1101 01:18:53.204889 1326758 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.26µs
	I1101 01:18:53.204895 1326758 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1101 01:18:53.204903 1326758 cache.go:107] acquiring lock: {Name:mk1ea4c71835f0e9602ac80532fc95f154ffac3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204932 1326758 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1101 01:18:53.204937 1326758 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 35.34µs
	I1101 01:18:53.204943 1326758 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1101 01:18:53.204951 1326758 cache.go:107] acquiring lock: {Name:mk835580881936495bac751ee7b074f531992fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:18:53.204974 1326758 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1101 01:18:53.204979 1326758 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 28.931µs
	I1101 01:18:53.205016 1326758 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1101 01:18:53.204759 1326758 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1101 01:18:53.205024 1326758 cache.go:87] Successfully saved all images to host disk.
	I1101 01:18:53.205060 1326758 cli_runner.go:164] Run: docker container inspect running-upgrade-788193 --format={{.State.Status}}
	I1101 01:18:53.222496 1326758 fix.go:102] recreateIfNeeded on running-upgrade-788193: state=Running err=<nil>
	W1101 01:18:53.222522 1326758 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:18:53.225043 1326758 out.go:177] * Updating the running docker "running-upgrade-788193" container ...
	I1101 01:18:53.227034 1326758 machine.go:88] provisioning docker machine ...
	I1101 01:18:53.227086 1326758 ubuntu.go:169] provisioning hostname "running-upgrade-788193"
	I1101 01:18:53.227184 1326758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-788193
	I1101 01:18:53.247526 1326758 main.go:141] libmachine: Using SSH client type: native
	I1101 01:18:53.248009 1326758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1101 01:18:53.248030 1326758 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-788193 && echo "running-upgrade-788193" | sudo tee /etc/hostname
	I1101 01:18:53.406077 1326758 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-788193
	
	I1101 01:18:53.406151 1326758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-788193
	I1101 01:18:53.436765 1326758 main.go:141] libmachine: Using SSH client type: native
	I1101 01:18:53.437384 1326758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1101 01:18:53.437408 1326758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-788193' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-788193/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-788193' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:18:53.608064 1326758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:18:53.608090 1326758 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 01:18:53.608125 1326758 ubuntu.go:177] setting up certificates
	I1101 01:18:53.608135 1326758 provision.go:83] configureAuth start
	I1101 01:18:53.608209 1326758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-788193
	I1101 01:18:53.650403 1326758 provision.go:138] copyHostCerts
	I1101 01:18:53.650469 1326758 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem, removing ...
	I1101 01:18:53.650506 1326758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 01:18:53.650589 1326758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 01:18:53.650701 1326758 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem, removing ...
	I1101 01:18:53.650713 1326758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 01:18:53.650743 1326758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 01:18:53.650816 1326758 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem, removing ...
	I1101 01:18:53.650826 1326758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 01:18:53.650854 1326758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 01:18:53.650911 1326758 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-788193 san=[192.168.70.226 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-788193]
	I1101 01:18:54.870572 1326758 provision.go:172] copyRemoteCerts
	I1101 01:18:54.870641 1326758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:18:54.870683 1326758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-788193
	I1101 01:18:54.889844 1326758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/running-upgrade-788193/id_rsa Username:docker}
	I1101 01:18:54.990980 1326758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:18:55.016367 1326758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:18:55.043411 1326758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 01:18:55.067629 1326758 provision.go:86] duration metric: configureAuth took 1.459474698s
	I1101 01:18:55.067655 1326758 ubuntu.go:193] setting minikube options for container-runtime
	I1101 01:18:55.067875 1326758 config.go:182] Loaded profile config "running-upgrade-788193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1101 01:18:55.067974 1326758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-788193
	I1101 01:18:55.087676 1326758 main.go:141] libmachine: Using SSH client type: native
	I1101 01:18:55.088092 1326758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34478 <nil> <nil>}
	I1101 01:18:55.088107 1326758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:18:55.665386 1326758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:18:55.665411 1326758 machine.go:91] provisioned docker machine in 2.438352243s
	I1101 01:18:55.665437 1326758 start.go:300] post-start starting for "running-upgrade-788193" (driver="docker")
	I1101 01:18:55.665451 1326758 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:18:55.665533 1326758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:18:55.665582 1326758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-788193
	I1101 01:18:55.690473 1326758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/running-upgrade-788193/id_rsa Username:docker}
	I1101 01:18:55.799245 1326758 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:18:55.803258 1326758 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 01:18:55.803286 1326758 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 01:18:55.803325 1326758 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 01:18:55.803338 1326758 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1101 01:18:55.803349 1326758 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 01:18:55.803420 1326758 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 01:18:55.803502 1326758 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> 12028972.pem in /etc/ssl/certs
	I1101 01:18:55.803626 1326758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:18:55.812747 1326758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 01:18:55.838360 1326758 start.go:303] post-start completed in 172.905892ms
	I1101 01:18:55.838443 1326758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 01:18:55.838504 1326758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-788193
	I1101 01:18:55.863612 1326758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/running-upgrade-788193/id_rsa Username:docker}
	I1101 01:18:55.961225 1326758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 01:18:55.967090 1326758 fix.go:56] fixHost completed within 2.762324424s
	I1101 01:18:55.967118 1326758 start.go:83] releasing machines lock for "running-upgrade-788193", held for 2.762367115s
	I1101 01:18:55.967194 1326758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-788193
	I1101 01:18:55.985367 1326758 ssh_runner.go:195] Run: cat /version.json
	I1101 01:18:55.985420 1326758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-788193
	I1101 01:18:55.985673 1326758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:18:55.985724 1326758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-788193
	I1101 01:18:56.006205 1326758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/running-upgrade-788193/id_rsa Username:docker}
	I1101 01:18:56.007136 1326758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34478 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/running-upgrade-788193/id_rsa Username:docker}
	W1101 01:18:56.212748 1326758 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1101 01:18:56.212859 1326758 ssh_runner.go:195] Run: systemctl --version
	I1101 01:18:56.218312 1326758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:18:56.361963 1326758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 01:18:56.368048 1326758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:18:56.388477 1326758 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 01:18:56.388595 1326758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:18:56.418882 1326758 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:18:56.418945 1326758 start.go:472] detecting cgroup driver to use...
	I1101 01:18:56.418993 1326758 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1101 01:18:56.419069 1326758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:18:56.447071 1326758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:18:56.458990 1326758 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:18:56.459093 1326758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:18:56.471749 1326758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:18:56.485127 1326758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1101 01:18:56.498541 1326758 docker.go:214] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1101 01:18:56.498635 1326758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:18:56.645101 1326758 docker.go:220] disabling docker service ...
	I1101 01:18:56.645198 1326758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:18:56.659430 1326758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:18:56.672469 1326758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:18:56.819035 1326758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:18:56.967513 1326758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:18:56.982088 1326758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:18:56.999789 1326758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1101 01:18:56.999907 1326758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:18:57.015073 1326758 out.go:177] 
	W1101 01:18:57.017174 1326758 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1101 01:18:57.017365 1326758 out.go:239] * 
	* 
	W1101 01:18:57.018603 1326758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 01:18:57.021619 1326758 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-788193 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
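The proximate cause is visible in the stderr block above: the new binary rewrites pause_image with sed against /etc/crio/crio.conf.d/02-crio.conf, but that drop-in file does not exist inside the v1.17.0-era kicbase container (note "Remote host: Ubuntu 20.04.1 LTS"), so sed exits 2 and start aborts with RUNTIME_ENABLE. A minimal path-tolerant sketch, assuming (not verified in this run) that the legacy image keeps its CRI-O config at /etc/crio/crio.conf:

	# hypothetical guard: fall back to the legacy CRI-O config path when the drop-in is absent
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"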
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-01 01:18:57.06395112 +0000 UTC m=+2813.609986890
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-788193
helpers_test.go:235: (dbg) docker inspect running-upgrade-788193:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c67579ac484ff791dec6c788ad8f497d1018ef8ddc41229fca9ace99bfd7bf96",
	        "Created": "2023-11-01T01:18:07.381742608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1323327,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-01T01:18:07.792740096Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/c67579ac484ff791dec6c788ad8f497d1018ef8ddc41229fca9ace99bfd7bf96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c67579ac484ff791dec6c788ad8f497d1018ef8ddc41229fca9ace99bfd7bf96/hostname",
	        "HostsPath": "/var/lib/docker/containers/c67579ac484ff791dec6c788ad8f497d1018ef8ddc41229fca9ace99bfd7bf96/hosts",
	        "LogPath": "/var/lib/docker/containers/c67579ac484ff791dec6c788ad8f497d1018ef8ddc41229fca9ace99bfd7bf96/c67579ac484ff791dec6c788ad8f497d1018ef8ddc41229fca9ace99bfd7bf96-json.log",
	        "Name": "/running-upgrade-788193",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-788193:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-788193",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b0c01a07ad5cd944d7eab560dc8c3227372c59cb668a8299e60b69c31d43716b-init/diff:/var/lib/docker/overlay2/4deeb9ad97a978e44215957f0cdc691c12ba30d49682d14a7596f099db22cf5d/diff:/var/lib/docker/overlay2/109351bdb7509db71992e440a25d1f7233410b7c329c393eeae742a9e70cd7df/diff:/var/lib/docker/overlay2/4add0dda98a83957f984c2f8daf0bc5659792266d7fb3abc8df6102f3ad5a762/diff:/var/lib/docker/overlay2/11c869eb2bcd500211f63f53aed13e3ed0e1427873d4745d7a7aeb191abf7c30/diff:/var/lib/docker/overlay2/cb449fc3a76ffbb69d47ec7057ddb012e323fed073bfe2337307f3aaabd845ef/diff:/var/lib/docker/overlay2/8fa713f88ab565cfaf910098ffbd2ff871a01897d78d0ca27d71c702588618e6/diff:/var/lib/docker/overlay2/ec9850d2c2d9357d51b44cd159894e2cdd4d9e96f14e655b4ceb20cdb5adc9cd/diff:/var/lib/docker/overlay2/db96ba35671f07c2041d1f136cb97d92e9d26993865c5b7fbe4f2b9d363e0a6e/diff:/var/lib/docker/overlay2/b29a8e75a61c50dd3b97e0eb580dcfca0cc565fe0f84d6b6ff93bd72eae613a0/diff:/var/lib/docker/overlay2/fb3b60
276ca3c1c96c257e7940107f518701b47335f1c8791f58439bde6b0a5a/diff:/var/lib/docker/overlay2/9aa44568523ceb1a4f89ebd76480258c5083419798a5c4f60616aea24aa3ad64/diff:/var/lib/docker/overlay2/675516c2ef0ec4e501598e8e9cb11e6b5c3ac73515d238a1a5fad94297dc9013/diff:/var/lib/docker/overlay2/d5d4fb96182ae320f46da0127115271f95c44fbfedfe59bc4052fa38f15f7e32/diff:/var/lib/docker/overlay2/b991fb10928849aa1423df8f4f53fd2f87aac34843c7d9e00ebfc47fd5c570a9/diff:/var/lib/docker/overlay2/d09a56ee1f6ec357862277876d51da16d1f038a21e9385673064ed140ea487a9/diff:/var/lib/docker/overlay2/f414f4bec00be64944305b107fcef70fc1eea5d037c4ff6028922755e16da506/diff:/var/lib/docker/overlay2/af32866dd45e1ab125b132d9b0a6a84a83eca8b71caf1e4f9e4a2d9fa7ab8fb8/diff:/var/lib/docker/overlay2/6e459be98b46bfbc21c2d09b15923fe07aa14db3ce7183678bb420d421923b80/diff:/var/lib/docker/overlay2/ee04458ac155a9273b8e2120f8a46273ec79caf38c16f06b4619fcf4cf844344/diff:/var/lib/docker/overlay2/7d220da3c58397d7936d6c1a363032274aefdae461eedabb210100f47ca2fdfc/diff:/var/lib/d
ocker/overlay2/aae8d878c9c08286ba04cd4959779d6d10f6d621ffbfd33313c3b3d5678b0616/diff:/var/lib/docker/overlay2/55d8efaaafbb5ab632cdc5795429d2a36a8cf9aa3e194d2cadd036f3522ce772/diff:/var/lib/docker/overlay2/1c71c83cace6f0076098d03e301a49b2b087b88af06690e685cb83352ead9e2d/diff:/var/lib/docker/overlay2/f7f6c65cd4457e421734a23770c6c0e6be9c3ebd5d9da24a3e5bda7c6919da22/diff:/var/lib/docker/overlay2/b5d283313d6b9a53b997163cfa21be94a7abc49faf8ff91e2a767a5e881f6294/diff:/var/lib/docker/overlay2/7b19993c3307232ac5a3c8189c9e8d6fd412f7efc5135b50dd2d71b16db098e4/diff:/var/lib/docker/overlay2/f8d23aa0114cdeaa885e815735be71171338d14c4a121edc02531cea5f325998/diff:/var/lib/docker/overlay2/a8ba83ee93cb495ef40dd7dfa73b21f2644072e0f544d275dae2eed4da80e845/diff:/var/lib/docker/overlay2/fe5423e38df3feadc753f5889745b3335b38b9a3cf14b293a2c4f0995f3b8cbd/diff:/var/lib/docker/overlay2/8c8f9713dadaaf31731351aea78e24cff26fb616190e6f9537c4d3348ee60d17/diff:/var/lib/docker/overlay2/5fad31d1922617b84b3c609b7d4a0d2e20ea43516e55c3b79903ef78bc0
01abe/diff:/var/lib/docker/overlay2/5b11ca5aef83b057cec0efbb794c5112247793bc48695d9c71e79fcce017a446/diff:/var/lib/docker/overlay2/5687f070e92f825a553bffa781aba79232fc877989d591eef9702e3a9c4bdeb2/diff:/var/lib/docker/overlay2/620bef9dbd4b2b357afbb20a28e4d3802385bd557674c2503cb449fe91eab73d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b0c01a07ad5cd944d7eab560dc8c3227372c59cb668a8299e60b69c31d43716b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b0c01a07ad5cd944d7eab560dc8c3227372c59cb668a8299e60b69c31d43716b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b0c01a07ad5cd944d7eab560dc8c3227372c59cb668a8299e60b69c31d43716b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-788193",
	                "Source": "/var/lib/docker/volumes/running-upgrade-788193/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-788193",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-788193",
	                "name.minikube.sigs.k8s.io": "running-upgrade-788193",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c867bf82442adcee74250eb3072052348401d130cb226e6ba2f4c21fb4f1707",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34476"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34475"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0c867bf82442",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-788193": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.226"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c67579ac484f",
	                        "running-upgrade-788193"
	                    ],
	                    "NetworkID": "0c49482a636e7e0bf678e30cf238c03e59ce7b89cb053a731a7313bab513d697",
	                    "EndpointID": "3db4bfb74e5da82fc87663b96686094104fd3656ba304a25761a68724ebf96f2",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.226",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:e2",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
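The dump above is the full container record; individual fields can be pulled with docker's Go-template --format flag instead (container and network names taken from the log; a network name containing hyphens needs index rather than dot access):

	# Print just the container state and its IP on the profile network.
	docker inspect running-upgrade-788193 \
	  --format '{{.State.Status}} {{(index .NetworkSettings.Networks "running-upgrade-788193").IPAddress}}'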
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-788193 -n running-upgrade-788193
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-788193 -n running-upgrade-788193: exit status 4 (646.498773ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 01:18:57.530645 1327453 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-788193" does not appear in /home/jenkins/minikube-integration/17486-1197516/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-788193" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
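The status check degrades to exit status 4 because the profile is missing from the kubeconfig, which matches the stale-context warning in stdout. A minimal sketch of the repair the warning itself suggests (profile name from the log):

	# Rewrite the kubeconfig entry for this profile, then verify it.
	minikube update-context -p running-upgrade-788193
	kubectl config current-context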
helpers_test.go:175: Cleaning up "running-upgrade-788193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-788193
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-788193: (3.166219977s)
--- FAIL: TestRunningBinaryUpgrade (72.24s)
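One way to confirm the root cause is to look inside the old base image directly; a sketch, assuming the v0.0.17 kicbase image referenced in the inspect output is still pullable:

	# List the crio drop-in directory in the old kicbase image; an empty
	# or missing directory would explain the failed sed above.
	docker run --rm --entrypoint ls \
	  gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e \
	  /etc/crio/crio.conf.d/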

                                                
                                    
x
+
TestMissingContainerUpgrade (176.9s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2905975999.exe start -p missing-upgrade-631570 --memory=2200 --driver=docker  --container-runtime=crio
E1101 01:14:18.928247 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.2905975999.exe start -p missing-upgrade-631570 --memory=2200 --driver=docker  --container-runtime=crio: (2m6.284062016s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-631570
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-631570: (1.813830886s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-631570
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-631570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-631570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (45.399140078s)

                                                
                                                
-- stdout --
	* [missing-upgrade-631570] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-631570 in cluster missing-upgrade-631570
	* Pulling base image ...
	* docker "missing-upgrade-631570" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 01:15:48.626407 1313919 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:15:48.626543 1313919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:15:48.626552 1313919 out.go:309] Setting ErrFile to fd 2...
	I1101 01:15:48.626558 1313919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:15:48.626817 1313919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 01:15:48.627173 1313919 out.go:303] Setting JSON to false
	I1101 01:15:48.628266 1313919 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32296,"bootTime":1698769053,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 01:15:48.628342 1313919 start.go:138] virtualization:  
	I1101 01:15:48.630942 1313919 out.go:177] * [missing-upgrade-631570] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 01:15:48.633369 1313919 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 01:15:48.635233 1313919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 01:15:48.633508 1313919 notify.go:220] Checking for updates...
	I1101 01:15:48.639696 1313919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:15:48.641466 1313919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 01:15:48.643318 1313919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 01:15:48.645449 1313919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 01:15:48.647859 1313919 config.go:182] Loaded profile config "missing-upgrade-631570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1101 01:15:48.650239 1313919 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1101 01:15:48.652124 1313919 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 01:15:48.676534 1313919 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 01:15:48.676642 1313919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:15:48.762513 1313919 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-11-01 01:15:48.752723738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:15:48.762615 1313919 docker.go:295] overlay module found
	I1101 01:15:48.765627 1313919 out.go:177] * Using the docker driver based on existing profile
	I1101 01:15:48.767530 1313919 start.go:298] selected driver: docker
	I1101 01:15:48.767546 1313919 start.go:902] validating driver "docker" against &{Name:missing-upgrade-631570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-631570 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.19 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 01:15:48.767644 1313919 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 01:15:48.768322 1313919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:15:48.840592 1313919 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:53 SystemTime:2023-11-01 01:15:48.831037866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:15:48.840951 1313919 cni.go:84] Creating CNI manager for ""
	I1101 01:15:48.840976 1313919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 01:15:48.841042 1313919 start_flags.go:323] config:
	{Name:missing-upgrade-631570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-631570 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.19 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 01:15:48.843633 1313919 out.go:177] * Starting control plane node missing-upgrade-631570 in cluster missing-upgrade-631570
	I1101 01:15:48.845726 1313919 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 01:15:48.848037 1313919 out.go:177] * Pulling base image ...
	I1101 01:15:48.850114 1313919 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1101 01:15:48.850202 1313919 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1101 01:15:48.868277 1313919 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1101 01:15:48.868466 1313919 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1101 01:15:48.868892 1313919 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1101 01:15:48.918964 1313919 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1101 01:15:48.919153 1313919 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/missing-upgrade-631570/config.json ...
	I1101 01:15:48.919267 1313919 cache.go:107] acquiring lock: {Name:mka89eb28dc72e1a46e6c55775643518cc76d2e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:15:48.919350 1313919 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 01:15:48.919361 1313919 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.142µs
	I1101 01:15:48.919370 1313919 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 01:15:48.919380 1313919 cache.go:107] acquiring lock: {Name:mkcd0eb14775904e216368f5cf607d17446ff03c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:15:48.919474 1313919 cache.go:107] acquiring lock: {Name:mk271c5f9ea5221d5c3ba6bd7ef149d160e54b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:15:48.919608 1313919 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1101 01:15:48.919639 1313919 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1101 01:15:48.919825 1313919 cache.go:107] acquiring lock: {Name:mk25f668b9d26dbd5166e63a4b6fd4ebaa89c209 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:15:48.919944 1313919 cache.go:107] acquiring lock: {Name:mkce4c558234459005acad2f6e3084db5d193195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:15:48.919995 1313919 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1101 01:15:48.920061 1313919 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1101 01:15:48.920157 1313919 cache.go:107] acquiring lock: {Name:mk1ea4c71835f0e9602ac80532fc95f154ffac3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:15:48.920323 1313919 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1101 01:15:48.920339 1313919 cache.go:107] acquiring lock: {Name:mk5c30858431dfdaab3ee3ccef6e6e01a4bd052f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:15:48.920477 1313919 cache.go:107] acquiring lock: {Name:mk835580881936495bac751ee7b074f531992fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:15:48.920563 1313919 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1101 01:15:48.920616 1313919 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1101 01:15:48.921687 1313919 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1101 01:15:48.922222 1313919 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1101 01:15:48.922403 1313919 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1101 01:15:48.922405 1313919 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1101 01:15:48.922834 1313919 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1101 01:15:48.923195 1313919 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1101 01:15:48.923622 1313919 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1101 01:15:49.266551 1313919 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1101 01:15:49.277787 1313919 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W1101 01:15:49.278386 1313919 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1101 01:15:49.278456 1313919 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1101 01:15:49.298913 1313919 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1101 01:15:49.299011 1313919 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1101 01:15:49.313418 1313919 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W1101 01:15:49.315823 1313919 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1101 01:15:49.315882 1313919 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1101 01:15:49.331179 1313919 cache.go:162] opening:  /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1101 01:15:49.418190 1313919 cache.go:157] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1101 01:15:49.418264 1313919 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 498.442306ms
	I1101 01:15:49.418292 1313919 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...: [download progress ticker elided]
	I1101 01:15:49.856452 1313919 cache.go:157] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1101 01:15:49.856481 1313919 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 936.006581ms
	I1101 01:15:49.856493 1313919 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1101 01:15:49.909593 1313919 cache.go:157] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1101 01:15:49.909657 1313919 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 989.321411ms
	I1101 01:15:49.909685 1313919 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1101 01:15:50.911944 1313919 cache.go:157] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1101 01:15:50.911971 1313919 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.992589522s
	I1101 01:15:50.911984 1313919 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1101 01:15:51.070487 1313919 cache.go:157] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1101 01:15:51.070549 1313919 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 2.150607815s
	I1101 01:15:51.070577 1313919 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1101 01:15:51.473964 1313919 cache.go:157] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1101 01:15:51.474049 1313919 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.554602305s
	I1101 01:15:51.474078 1313919 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1101 01:15:54.384824 1313919 cache.go:157] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1101 01:15:54.384860 1313919 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 5.4647052s
	I1101 01:15:54.384873 1313919 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1101 01:15:54.384893 1313919 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% [download progress ticker elided]
	I1101 01:16:03.578144 1313919 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1101 01:16:03.578156 1313919 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1101 01:16:04.938348 1313919 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1101 01:16:04.938388 1313919 cache.go:194] Successfully downloaded all kic artifacts
	I1101 01:16:04.938428 1313919 start.go:365] acquiring machines lock for missing-upgrade-631570: {Name:mk102ae20cbf4f7efd9d706e732fd3f9b0a1328c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:16:04.938508 1313919 start.go:369] acquired machines lock for "missing-upgrade-631570" in 55.359µs
	I1101 01:16:04.938531 1313919 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:16:04.938538 1313919 fix.go:54] fixHost starting: 
	I1101 01:16:04.938822 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:04.960845 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:04.960903 1313919 fix.go:102] recreateIfNeeded on missing-upgrade-631570: state= err=unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:04.960920 1313919 fix.go:107] machineExists: false. err=machine does not exist
	I1101 01:16:04.962610 1313919 out.go:177] * docker "missing-upgrade-631570" container is missing, will recreate.
	I1101 01:16:04.964642 1313919 delete.go:124] DEMOLISHING missing-upgrade-631570 ...
	I1101 01:16:04.964771 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:04.988932 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	W1101 01:16:04.989012 1313919 stop.go:75] unable to get state: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:04.989031 1313919 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:04.989482 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:05.010347 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:05.010409 1313919 delete.go:82] Unable to get host status for missing-upgrade-631570, assuming it has already been deleted: state: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:05.010475 1313919 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-631570
	W1101 01:16:05.041977 1313919 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-631570 returned with exit code 1
	I1101 01:16:05.042013 1313919 kic.go:371] could not find the container missing-upgrade-631570 to remove it. will try anyways
	I1101 01:16:05.042071 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:05.061227 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	W1101 01:16:05.061812 1313919 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:05.061898 1313919 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-631570 /bin/bash -c "sudo init 0"
	W1101 01:16:05.090213 1313919 cli_runner.go:211] docker exec --privileged -t missing-upgrade-631570 /bin/bash -c "sudo init 0" returned with exit code 1
	I1101 01:16:05.090269 1313919 oci.go:650] error shutdown missing-upgrade-631570: docker exec --privileged -t missing-upgrade-631570 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:06.090474 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:06.107703 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:06.107779 1313919 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:06.107793 1313919 oci.go:664] temporary error: container missing-upgrade-631570 status is  but expect it to be exited
	I1101 01:16:06.107820 1313919 retry.go:31] will retry after 507.837657ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:06.616582 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:06.659834 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:06.659901 1313919 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:06.659922 1313919 oci.go:664] temporary error: container missing-upgrade-631570 status is  but expect it to be exited
	I1101 01:16:06.659944 1313919 retry.go:31] will retry after 865.690418ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:07.525830 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:07.555552 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:07.555613 1313919 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:07.555626 1313919 oci.go:664] temporary error: container missing-upgrade-631570 status is  but expect it to be exited
	I1101 01:16:07.555653 1313919 retry.go:31] will retry after 710.206941ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:08.266162 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:08.286250 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:08.286310 1313919 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:08.286323 1313919 oci.go:664] temporary error: container missing-upgrade-631570 status is  but expect it to be exited
	I1101 01:16:08.286346 1313919 retry.go:31] will retry after 899.268142ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:09.186184 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:09.220056 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:09.220116 1313919 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:09.220126 1313919 oci.go:664] temporary error: container missing-upgrade-631570 status is  but expect it to be exited
	I1101 01:16:09.220150 1313919 retry.go:31] will retry after 3.07596301s: couldn't verify container is exited. %v: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:12.296324 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:12.313123 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:12.313196 1313919 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:12.313210 1313919 oci.go:664] temporary error: container missing-upgrade-631570 status is  but expect it to be exited
	I1101 01:16:12.313237 1313919 retry.go:31] will retry after 5.186018593s: couldn't verify container is exited. %v: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:17.501144 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:17.533812 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:17.533877 1313919 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:17.533895 1313919 oci.go:664] temporary error: container missing-upgrade-631570 status is  but expect it to be exited
	I1101 01:16:17.533919 1313919 retry.go:31] will retry after 6.136858542s: couldn't verify container is exited. %v: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:23.671427 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:23.691187 1313919 cli_runner.go:211] docker container inspect missing-upgrade-631570 --format={{.State.Status}} returned with exit code 1
	I1101 01:16:23.691247 1313919 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	I1101 01:16:23.691261 1313919 oci.go:664] temporary error: container missing-upgrade-631570 status is  but expect it to be exited
	I1101 01:16:23.691292 1313919 oci.go:88] couldn't shut down missing-upgrade-631570 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-631570": docker container inspect missing-upgrade-631570 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-631570
	 
	I1101 01:16:23.691342 1313919 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-631570
	I1101 01:16:23.709512 1313919 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-631570
	W1101 01:16:23.727878 1313919 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-631570 returned with exit code 1
	I1101 01:16:23.727957 1313919 cli_runner.go:164] Run: docker network inspect missing-upgrade-631570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 01:16:23.747204 1313919 cli_runner.go:164] Run: docker network rm missing-upgrade-631570
	I1101 01:16:23.856519 1313919 fix.go:114] Sleeping 1 second for extra luck!
	I1101 01:16:24.857364 1313919 start.go:125] createHost starting for "" (driver="docker")
	I1101 01:16:24.871636 1313919 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1101 01:16:24.871790 1313919 start.go:159] libmachine.API.Create for "missing-upgrade-631570" (driver="docker")
	I1101 01:16:24.871810 1313919 client.go:168] LocalClient.Create starting
	I1101 01:16:24.871877 1313919 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem
	I1101 01:16:24.871912 1313919 main.go:141] libmachine: Decoding PEM data...
	I1101 01:16:24.871926 1313919 main.go:141] libmachine: Parsing certificate...
	I1101 01:16:24.871985 1313919 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem
	I1101 01:16:24.872004 1313919 main.go:141] libmachine: Decoding PEM data...
	I1101 01:16:24.872014 1313919 main.go:141] libmachine: Parsing certificate...
	I1101 01:16:24.872760 1313919 cli_runner.go:164] Run: docker network inspect missing-upgrade-631570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 01:16:24.905164 1313919 cli_runner.go:211] docker network inspect missing-upgrade-631570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 01:16:24.905243 1313919 network_create.go:281] running [docker network inspect missing-upgrade-631570] to gather additional debugging logs...
	I1101 01:16:24.905259 1313919 cli_runner.go:164] Run: docker network inspect missing-upgrade-631570
	W1101 01:16:24.923495 1313919 cli_runner.go:211] docker network inspect missing-upgrade-631570 returned with exit code 1
	I1101 01:16:24.923524 1313919 network_create.go:284] error running [docker network inspect missing-upgrade-631570]: docker network inspect missing-upgrade-631570: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-631570 not found
	I1101 01:16:24.923543 1313919 network_create.go:286] output of [docker network inspect missing-upgrade-631570]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-631570 not found
	
	** /stderr **
	I1101 01:16:24.923655 1313919 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 01:16:24.945123 1313919 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b5f97457863e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:41:14:12:3e} reservation:<nil>}
	I1101 01:16:24.945533 1313919 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-249c110faf75 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d5:24:98:01} reservation:<nil>}
	I1101 01:16:24.945816 1313919 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-6509d09148ef IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:4b:e7:42:70} reservation:<nil>}
	I1101 01:16:24.946221 1313919 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002c082d0}
	I1101 01:16:24.946239 1313919 network_create.go:124] attempt to create docker network missing-upgrade-631570 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1101 01:16:24.946298 1313919 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-631570 missing-upgrade-631570
	I1101 01:16:25.032744 1313919 network_create.go:108] docker network missing-upgrade-631570 192.168.76.0/24 created
	I1101 01:16:25.032773 1313919 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-631570" container
	I1101 01:16:25.032850 1313919 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 01:16:25.052150 1313919 cli_runner.go:164] Run: docker volume create missing-upgrade-631570 --label name.minikube.sigs.k8s.io=missing-upgrade-631570 --label created_by.minikube.sigs.k8s.io=true
	I1101 01:16:25.076059 1313919 oci.go:103] Successfully created a docker volume missing-upgrade-631570
	I1101 01:16:25.076146 1313919 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-631570-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-631570 --entrypoint /usr/bin/test -v missing-upgrade-631570:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1101 01:16:27.171285 1313919 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-631570-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-631570 --entrypoint /usr/bin/test -v missing-upgrade-631570:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (2.095097765s)
	I1101 01:16:27.171310 1313919 oci.go:107] Successfully prepared a docker volume missing-upgrade-631570
	I1101 01:16:27.171326 1313919 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1101 01:16:27.171465 1313919 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1101 01:16:27.171562 1313919 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 01:16:27.293152 1313919 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-631570 --name missing-upgrade-631570 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-631570 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-631570 --network missing-upgrade-631570 --ip 192.168.76.2 --volume missing-upgrade-631570:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1101 01:16:27.834174 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Running}}
	I1101 01:16:27.865380 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	I1101 01:16:27.912792 1313919 cli_runner.go:164] Run: docker exec missing-upgrade-631570 stat /var/lib/dpkg/alternatives/iptables
	I1101 01:16:28.027342 1313919 oci.go:144] the created container "missing-upgrade-631570" has a running status.
	I1101 01:16:28.027369 1313919 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa...
	I1101 01:16:28.927408 1313919 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 01:16:28.959377 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	I1101 01:16:28.991747 1313919 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 01:16:28.991765 1313919 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-631570 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 01:16:29.088351 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	I1101 01:16:29.133912 1313919 machine.go:88] provisioning docker machine ...
	I1101 01:16:29.133940 1313919 ubuntu.go:169] provisioning hostname "missing-upgrade-631570"
	I1101 01:16:29.134018 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:29.177247 1313919 main.go:141] libmachine: Using SSH client type: native
	I1101 01:16:29.177687 1313919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34466 <nil> <nil>}
	I1101 01:16:29.177700 1313919 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-631570 && echo "missing-upgrade-631570" | sudo tee /etc/hostname
	I1101 01:16:29.389334 1313919 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-631570
	
	I1101 01:16:29.389482 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:29.418578 1313919 main.go:141] libmachine: Using SSH client type: native
	I1101 01:16:29.418981 1313919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34466 <nil> <nil>}
	I1101 01:16:29.419002 1313919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-631570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-631570/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-631570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:16:29.601774 1313919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:16:29.601847 1313919 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 01:16:29.601910 1313919 ubuntu.go:177] setting up certificates
	I1101 01:16:29.601954 1313919 provision.go:83] configureAuth start
	I1101 01:16:29.602052 1313919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-631570
	I1101 01:16:29.640508 1313919 provision.go:138] copyHostCerts
	I1101 01:16:29.640580 1313919 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem, removing ...
	I1101 01:16:29.640591 1313919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 01:16:29.640675 1313919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 01:16:29.640835 1313919 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem, removing ...
	I1101 01:16:29.640841 1313919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 01:16:29.640871 1313919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 01:16:29.640929 1313919 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem, removing ...
	I1101 01:16:29.640934 1313919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 01:16:29.640967 1313919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 01:16:29.641051 1313919 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-631570 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-631570]
	I1101 01:16:30.323943 1313919 provision.go:172] copyRemoteCerts
	I1101 01:16:30.324025 1313919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:16:30.324078 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:30.356187 1313919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34466 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa Username:docker}
	I1101 01:16:30.475954 1313919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 01:16:30.534590 1313919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 01:16:30.573808 1313919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:16:30.615010 1313919 provision.go:86] duration metric: configureAuth took 1.013024793s
	I1101 01:16:30.615037 1313919 ubuntu.go:193] setting minikube options for container-runtime
	I1101 01:16:30.615236 1313919 config.go:182] Loaded profile config "missing-upgrade-631570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1101 01:16:30.615343 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:30.677350 1313919 main.go:141] libmachine: Using SSH client type: native
	I1101 01:16:30.677768 1313919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34466 <nil> <nil>}
	I1101 01:16:30.677790 1313919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:16:31.157257 1313919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:16:31.157277 1313919 machine.go:91] provisioned docker machine in 2.023347335s
	I1101 01:16:31.157287 1313919 client.go:171] LocalClient.Create took 6.285471942s
	I1101 01:16:31.157300 1313919 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-631570" took 6.285511933s
	I1101 01:16:31.157308 1313919 start.go:300] post-start starting for "missing-upgrade-631570" (driver="docker")
	I1101 01:16:31.157319 1313919 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:16:31.157403 1313919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:16:31.157449 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:31.176489 1313919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34466 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa Username:docker}
	I1101 01:16:31.283302 1313919 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:16:31.287575 1313919 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 01:16:31.287648 1313919 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 01:16:31.287680 1313919 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 01:16:31.287725 1313919 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1101 01:16:31.287753 1313919 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 01:16:31.287838 1313919 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 01:16:31.287976 1313919 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> 12028972.pem in /etc/ssl/certs
	I1101 01:16:31.288147 1313919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:16:31.296702 1313919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 01:16:31.321452 1313919 start.go:303] post-start completed in 164.12692ms
	I1101 01:16:31.321807 1313919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-631570
	I1101 01:16:31.342522 1313919 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/missing-upgrade-631570/config.json ...
	I1101 01:16:31.342815 1313919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 01:16:31.342868 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:31.362805 1313919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34466 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa Username:docker}
	I1101 01:16:31.461358 1313919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 01:16:31.467736 1313919 start.go:128] duration metric: createHost completed in 6.610336051s
	I1101 01:16:31.467838 1313919 cli_runner.go:164] Run: docker container inspect missing-upgrade-631570 --format={{.State.Status}}
	W1101 01:16:31.493500 1313919 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:16:31.493522 1313919 machine.go:88] provisioning docker machine ...
	I1101 01:16:31.493539 1313919 ubuntu.go:169] provisioning hostname "missing-upgrade-631570"
	I1101 01:16:31.493602 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:31.524761 1313919 main.go:141] libmachine: Using SSH client type: native
	I1101 01:16:31.525266 1313919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34466 <nil> <nil>}
	I1101 01:16:31.525293 1313919 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-631570 && echo "missing-upgrade-631570" | sudo tee /etc/hostname
	I1101 01:16:31.686465 1313919 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-631570
	
	I1101 01:16:31.686604 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:31.708909 1313919 main.go:141] libmachine: Using SSH client type: native
	I1101 01:16:31.709354 1313919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34466 <nil> <nil>}
	I1101 01:16:31.709374 1313919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-631570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-631570/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-631570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:16:31.854026 1313919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:16:31.854094 1313919 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 01:16:31.854138 1313919 ubuntu.go:177] setting up certificates
	I1101 01:16:31.854174 1313919 provision.go:83] configureAuth start
	I1101 01:16:31.854260 1313919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-631570
	I1101 01:16:31.879574 1313919 provision.go:138] copyHostCerts
	I1101 01:16:31.879634 1313919 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem, removing ...
	I1101 01:16:31.879643 1313919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 01:16:31.879716 1313919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 01:16:31.879806 1313919 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem, removing ...
	I1101 01:16:31.879812 1313919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 01:16:31.879836 1313919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 01:16:31.879885 1313919 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem, removing ...
	I1101 01:16:31.879890 1313919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 01:16:31.879912 1313919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 01:16:31.879956 1313919 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-631570 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-631570]
	I1101 01:16:32.266520 1313919 provision.go:172] copyRemoteCerts
	I1101 01:16:32.266591 1313919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:16:32.266649 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:32.284565 1313919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34466 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa Username:docker}
	I1101 01:16:32.381899 1313919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 01:16:32.404454 1313919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:16:32.426879 1313919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 01:16:32.449868 1313919 provision.go:86] duration metric: configureAuth took 595.66077ms
	I1101 01:16:32.449896 1313919 ubuntu.go:193] setting minikube options for container-runtime
	I1101 01:16:32.450080 1313919 config.go:182] Loaded profile config "missing-upgrade-631570": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1101 01:16:32.450191 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:32.468609 1313919 main.go:141] libmachine: Using SSH client type: native
	I1101 01:16:32.469050 1313919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34466 <nil> <nil>}
	I1101 01:16:32.469076 1313919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:16:32.780883 1313919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:16:32.780904 1313919 machine.go:91] provisioned docker machine in 1.287374416s
	I1101 01:16:32.780914 1313919 start.go:300] post-start starting for "missing-upgrade-631570" (driver="docker")
	I1101 01:16:32.780924 1313919 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:16:32.781037 1313919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:16:32.781079 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:32.799666 1313919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34466 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa Username:docker}
	I1101 01:16:32.898157 1313919 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:16:32.902111 1313919 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 01:16:32.902138 1313919 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 01:16:32.902150 1313919 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 01:16:32.902157 1313919 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1101 01:16:32.902169 1313919 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 01:16:32.902242 1313919 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 01:16:32.902335 1313919 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> 12028972.pem in /etc/ssl/certs
	I1101 01:16:32.902528 1313919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:16:32.911131 1313919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 01:16:32.933530 1313919 start.go:303] post-start completed in 152.600684ms
	I1101 01:16:32.933612 1313919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 01:16:32.933665 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:32.952213 1313919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34466 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa Username:docker}
	I1101 01:16:33.047406 1313919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 01:16:33.053307 1313919 fix.go:56] fixHost completed within 28.114760625s
	I1101 01:16:33.053335 1313919 start.go:83] releasing machines lock for "missing-upgrade-631570", held for 28.114815804s
	I1101 01:16:33.053412 1313919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-631570
	I1101 01:16:33.072654 1313919 ssh_runner.go:195] Run: cat /version.json
	I1101 01:16:33.072710 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:33.073076 1313919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:16:33.073155 1313919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631570
	I1101 01:16:33.093104 1313919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34466 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa Username:docker}
	I1101 01:16:33.093264 1313919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34466 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/missing-upgrade-631570/id_rsa Username:docker}
	W1101 01:16:33.190021 1313919 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1101 01:16:33.190105 1313919 ssh_runner.go:195] Run: systemctl --version
	I1101 01:16:33.310838 1313919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:16:33.403856 1313919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 01:16:33.409407 1313919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:16:33.445138 1313919 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 01:16:33.445267 1313919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:16:33.479618 1313919 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:16:33.479641 1313919 start.go:472] detecting cgroup driver to use...
	I1101 01:16:33.479683 1313919 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1101 01:16:33.479757 1313919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:16:33.507349 1313919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:16:33.519065 1313919 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:16:33.519179 1313919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:16:33.531107 1313919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:16:33.542869 1313919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1101 01:16:33.555749 1313919 docker.go:214] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1101 01:16:33.555817 1313919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:16:33.664118 1313919 docker.go:220] disabling docker service ...
	I1101 01:16:33.664233 1313919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:16:33.677855 1313919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:16:33.689912 1313919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:16:33.794398 1313919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:16:33.901803 1313919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:16:33.913918 1313919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:16:33.931255 1313919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1101 01:16:33.931341 1313919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:16:33.944015 1313919 out.go:177] 
	W1101 01:16:33.945807 1313919 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1101 01:16:33.945838 1313919 out.go:239] * 
	* 
	W1101 01:16:33.946865 1313919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 01:16:33.949208 1313919 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-631570 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-11-01 01:16:33.999698219 +0000 UTC m=+2670.545733990
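The fatal error in the stderr block above is the pause_image rewrite: kicbase v0.0.17 predates minikube's drop-in cri-o configuration, so /etc/crio/crio.conf.d/02-crio.conf does not exist inside the container, the sed exits 2, and start aborts with RUNTIME_ENABLE. Below is a minimal sketch of a more defensive rewrite, assuming (not confirmed by this log) that such old images keep the setting in the monolithic /etc/crio/crio.conf instead; updatePauseImage and the local exec.Command calls are illustrative stand-ins for minikube's ssh_runner, not its actual code.

// Sketch: pick whichever cri-o config file actually exists before
// rewriting pause_image, instead of assuming the drop-in path.
package main

import (
	"fmt"
	"os/exec"
)

// Candidate config locations: the drop-in used by newer kicbase
// images, then the monolithic file older images are assumed to ship.
var crioConfigs = []string{
	"/etc/crio/crio.conf.d/02-crio.conf",
	"/etc/crio/crio.conf", // assumed fallback for old images
}

func updatePauseImage(image string) error {
	for _, cfg := range crioConfigs {
		// `test -f` is the existence check the failing sed never ran.
		if err := exec.Command("sudo", "test", "-f", cfg).Run(); err != nil {
			continue // file absent, try the next candidate
		}
		sedExpr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
		out, err := exec.Command("sudo", "sed", "-i", sedExpr, cfg).CombinedOutput()
		if err != nil {
			return fmt.Errorf("update pause_image in %s: %v: %s", cfg, err, out)
		}
		return nil
	}
	return fmt.Errorf("no cri-o config found in %v", crioConfigs)
}

func main() {
	if err := updatePauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Println("X RUNTIME_ENABLE:", err)
	}
}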
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-631570
helpers_test.go:235: (dbg) docker inspect missing-upgrade-631570:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b465d147a4596bcace51ffa882c9396ec9e6e7da86d414e3e374ec142f8749a7",
	        "Created": "2023-11-01T01:16:27.311446453Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1316254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-01T01:16:27.825970421Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/b465d147a4596bcace51ffa882c9396ec9e6e7da86d414e3e374ec142f8749a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b465d147a4596bcace51ffa882c9396ec9e6e7da86d414e3e374ec142f8749a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b465d147a4596bcace51ffa882c9396ec9e6e7da86d414e3e374ec142f8749a7/hosts",
	        "LogPath": "/var/lib/docker/containers/b465d147a4596bcace51ffa882c9396ec9e6e7da86d414e3e374ec142f8749a7/b465d147a4596bcace51ffa882c9396ec9e6e7da86d414e3e374ec142f8749a7-json.log",
	        "Name": "/missing-upgrade-631570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-631570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-631570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2f7e4c07abb852a4cd13980632f52a459d0ce2702dcc7cdc7f74967e81d4a8e4-init/diff:/var/lib/docker/overlay2/4deeb9ad97a978e44215957f0cdc691c12ba30d49682d14a7596f099db22cf5d/diff:/var/lib/docker/overlay2/109351bdb7509db71992e440a25d1f7233410b7c329c393eeae742a9e70cd7df/diff:/var/lib/docker/overlay2/4add0dda98a83957f984c2f8daf0bc5659792266d7fb3abc8df6102f3ad5a762/diff:/var/lib/docker/overlay2/11c869eb2bcd500211f63f53aed13e3ed0e1427873d4745d7a7aeb191abf7c30/diff:/var/lib/docker/overlay2/cb449fc3a76ffbb69d47ec7057ddb012e323fed073bfe2337307f3aaabd845ef/diff:/var/lib/docker/overlay2/8fa713f88ab565cfaf910098ffbd2ff871a01897d78d0ca27d71c702588618e6/diff:/var/lib/docker/overlay2/ec9850d2c2d9357d51b44cd159894e2cdd4d9e96f14e655b4ceb20cdb5adc9cd/diff:/var/lib/docker/overlay2/db96ba35671f07c2041d1f136cb97d92e9d26993865c5b7fbe4f2b9d363e0a6e/diff:/var/lib/docker/overlay2/b29a8e75a61c50dd3b97e0eb580dcfca0cc565fe0f84d6b6ff93bd72eae613a0/diff:/var/lib/docker/overlay2/fb3b60
276ca3c1c96c257e7940107f518701b47335f1c8791f58439bde6b0a5a/diff:/var/lib/docker/overlay2/9aa44568523ceb1a4f89ebd76480258c5083419798a5c4f60616aea24aa3ad64/diff:/var/lib/docker/overlay2/675516c2ef0ec4e501598e8e9cb11e6b5c3ac73515d238a1a5fad94297dc9013/diff:/var/lib/docker/overlay2/d5d4fb96182ae320f46da0127115271f95c44fbfedfe59bc4052fa38f15f7e32/diff:/var/lib/docker/overlay2/b991fb10928849aa1423df8f4f53fd2f87aac34843c7d9e00ebfc47fd5c570a9/diff:/var/lib/docker/overlay2/d09a56ee1f6ec357862277876d51da16d1f038a21e9385673064ed140ea487a9/diff:/var/lib/docker/overlay2/f414f4bec00be64944305b107fcef70fc1eea5d037c4ff6028922755e16da506/diff:/var/lib/docker/overlay2/af32866dd45e1ab125b132d9b0a6a84a83eca8b71caf1e4f9e4a2d9fa7ab8fb8/diff:/var/lib/docker/overlay2/6e459be98b46bfbc21c2d09b15923fe07aa14db3ce7183678bb420d421923b80/diff:/var/lib/docker/overlay2/ee04458ac155a9273b8e2120f8a46273ec79caf38c16f06b4619fcf4cf844344/diff:/var/lib/docker/overlay2/7d220da3c58397d7936d6c1a363032274aefdae461eedabb210100f47ca2fdfc/diff:/var/lib/d
ocker/overlay2/aae8d878c9c08286ba04cd4959779d6d10f6d621ffbfd33313c3b3d5678b0616/diff:/var/lib/docker/overlay2/55d8efaaafbb5ab632cdc5795429d2a36a8cf9aa3e194d2cadd036f3522ce772/diff:/var/lib/docker/overlay2/1c71c83cace6f0076098d03e301a49b2b087b88af06690e685cb83352ead9e2d/diff:/var/lib/docker/overlay2/f7f6c65cd4457e421734a23770c6c0e6be9c3ebd5d9da24a3e5bda7c6919da22/diff:/var/lib/docker/overlay2/b5d283313d6b9a53b997163cfa21be94a7abc49faf8ff91e2a767a5e881f6294/diff:/var/lib/docker/overlay2/7b19993c3307232ac5a3c8189c9e8d6fd412f7efc5135b50dd2d71b16db098e4/diff:/var/lib/docker/overlay2/f8d23aa0114cdeaa885e815735be71171338d14c4a121edc02531cea5f325998/diff:/var/lib/docker/overlay2/a8ba83ee93cb495ef40dd7dfa73b21f2644072e0f544d275dae2eed4da80e845/diff:/var/lib/docker/overlay2/fe5423e38df3feadc753f5889745b3335b38b9a3cf14b293a2c4f0995f3b8cbd/diff:/var/lib/docker/overlay2/8c8f9713dadaaf31731351aea78e24cff26fb616190e6f9537c4d3348ee60d17/diff:/var/lib/docker/overlay2/5fad31d1922617b84b3c609b7d4a0d2e20ea43516e55c3b79903ef78bc0
01abe/diff:/var/lib/docker/overlay2/5b11ca5aef83b057cec0efbb794c5112247793bc48695d9c71e79fcce017a446/diff:/var/lib/docker/overlay2/5687f070e92f825a553bffa781aba79232fc877989d591eef9702e3a9c4bdeb2/diff:/var/lib/docker/overlay2/620bef9dbd4b2b357afbb20a28e4d3802385bd557674c2503cb449fe91eab73d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f7e4c07abb852a4cd13980632f52a459d0ce2702dcc7cdc7f74967e81d4a8e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f7e4c07abb852a4cd13980632f52a459d0ce2702dcc7cdc7f74967e81d4a8e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f7e4c07abb852a4cd13980632f52a459d0ce2702dcc7cdc7f74967e81d4a8e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-631570",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-631570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-631570",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-631570",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-631570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4c0e4592c3480b6ac552ea4efe4ee153f3a7c883d5c32df5cfe5eeab6a8f7a3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34465"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34464"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34463"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e4c0e4592c34",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-631570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b465d147a459",
	                        "missing-upgrade-631570"
	                    ],
	                    "NetworkID": "9c732f8fd0f1cb52487cd3bb5a77dab8a0f3506caa737d117e5c3dfbe05e3771",
	                    "EndpointID": "13cf8379b1fd1675a7b3f7fa7e8143c074b9cf6cbb5abb4dd8882ef9842552e4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
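For orientation: the "Ports" map in the inspect dump above is what the test harness consults to reach the container, since every container port is published on 127.0.0.1 under an ephemeral host port. A minimal sketch of querying one mapping by hand, reusing the same Go template the cli_runner invokes later in this report (container name taken from the dump above; not part of the test itself):

	# Resolve the host port that forwards to the container's SSH port 22/tcp.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-631570
	# Per the mapping above, this would print: 34466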
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-631570 -n missing-upgrade-631570
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-631570 -n missing-upgrade-631570: exit status 6 (352.530884ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1101 01:16:34.354772 1317275 status.go:415] kubeconfig endpoint: got: 192.168.59.19:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-631570" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-631570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-631570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-631570: (1.900145316s)
--- FAIL: TestMissingContainerUpgrade (176.90s)
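The status error above is kubeconfig drift: the profile's kubeconfig entry still names 192.168.59.19:8443, while the recreated container came up at 192.168.76.2. A minimal sketch of the repair the warning itself suggests (profile name from the log; assumes the profile has not yet been deleted):

	# Show which API endpoint the current kubectl context points at.
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo
	# Rewrite the kubeconfig entry to match the running cluster.
	out/minikube-linux-arm64 -p missing-upgrade-631570 update-context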

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (68.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.817246888.exe start -p stopped-upgrade-506779 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1101 01:17:02.259671 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.817246888.exe start -p stopped-upgrade-506779 --memory=2200 --vm-driver=docker  --container-runtime=crio: (59.423942402s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.817246888.exe -p stopped-upgrade-506779 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.817246888.exe -p stopped-upgrade-506779 stop: (2.432645375s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-506779 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-506779 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.433825069s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-506779] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-506779 in cluster stopped-upgrade-506779
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-506779" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 01:17:39.273948 1321067 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:17:39.274146 1321067 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:17:39.274153 1321067 out.go:309] Setting ErrFile to fd 2...
	I1101 01:17:39.274158 1321067 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:17:39.274426 1321067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 01:17:39.274769 1321067 out.go:303] Setting JSON to false
	I1101 01:17:39.275718 1321067 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32407,"bootTime":1698769053,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 01:17:39.275784 1321067 start.go:138] virtualization:  
	I1101 01:17:39.279664 1321067 out.go:177] * [stopped-upgrade-506779] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 01:17:39.281573 1321067 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1101 01:17:39.288164 1321067 notify.go:220] Checking for updates...
	I1101 01:17:39.294278 1321067 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 01:17:39.296480 1321067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 01:17:39.298674 1321067 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:17:39.300837 1321067 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 01:17:39.302914 1321067 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 01:17:39.304606 1321067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 01:17:39.307006 1321067 config.go:182] Loaded profile config "stopped-upgrade-506779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1101 01:17:39.309242 1321067 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1101 01:17:39.311225 1321067 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 01:17:39.371525 1321067 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 01:17:39.371640 1321067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:17:39.477612 1321067 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1101 01:17:39.488767 1321067 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-01 01:17:39.478342079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:17:39.488875 1321067 docker.go:295] overlay module found
	I1101 01:17:39.492968 1321067 out.go:177] * Using the docker driver based on existing profile
	I1101 01:17:39.495154 1321067 start.go:298] selected driver: docker
	I1101 01:17:39.495169 1321067 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-506779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-506779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.156 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 01:17:39.495274 1321067 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 01:17:39.496460 1321067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:17:39.573719 1321067 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-01 01:17:39.563968033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:17:39.574062 1321067 cni.go:84] Creating CNI manager for ""
	I1101 01:17:39.574082 1321067 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 01:17:39.574097 1321067 start_flags.go:323] config:
	{Name:stopped-upgrade-506779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-506779 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.156 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1101 01:17:39.577291 1321067 out.go:177] * Starting control plane node stopped-upgrade-506779 in cluster stopped-upgrade-506779
	I1101 01:17:39.579242 1321067 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 01:17:39.581464 1321067 out.go:177] * Pulling base image ...
	I1101 01:17:39.583333 1321067 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1101 01:17:39.583418 1321067 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1101 01:17:39.601605 1321067 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1101 01:17:39.601633 1321067 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1101 01:17:39.650247 1321067 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1101 01:17:39.650398 1321067 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/stopped-upgrade-506779/config.json ...
	I1101 01:17:39.650480 1321067 cache.go:107] acquiring lock: {Name:mka89eb28dc72e1a46e6c55775643518cc76d2e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650564 1321067 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1101 01:17:39.650574 1321067 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.184µs
	I1101 01:17:39.650583 1321067 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1101 01:17:39.650593 1321067 cache.go:107] acquiring lock: {Name:mkcd0eb14775904e216368f5cf607d17446ff03c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650623 1321067 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1101 01:17:39.650628 1321067 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 35.881µs
	I1101 01:17:39.650640 1321067 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1101 01:17:39.650648 1321067 cache.go:194] Successfully downloaded all kic artifacts
	I1101 01:17:39.650649 1321067 cache.go:107] acquiring lock: {Name:mkce4c558234459005acad2f6e3084db5d193195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650669 1321067 start.go:365] acquiring machines lock for stopped-upgrade-506779: {Name:mk69dce0511c62603c2929496d4a8df53b36a126 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650675 1321067 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1101 01:17:39.650683 1321067 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 35.175µs
	I1101 01:17:39.650692 1321067 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1101 01:17:39.650700 1321067 cache.go:107] acquiring lock: {Name:mk5c30858431dfdaab3ee3ccef6e6e01a4bd052f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650705 1321067 start.go:369] acquired machines lock for "stopped-upgrade-506779" in 24.262µs
	I1101 01:17:39.650718 1321067 start.go:96] Skipping create...Using existing machine configuration
	I1101 01:17:39.650724 1321067 fix.go:54] fixHost starting: 
	I1101 01:17:39.650727 1321067 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1101 01:17:39.650732 1321067 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 33.46µs
	I1101 01:17:39.650738 1321067 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1101 01:17:39.650746 1321067 cache.go:107] acquiring lock: {Name:mk271c5f9ea5221d5c3ba6bd7ef149d160e54b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650774 1321067 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1101 01:17:39.650779 1321067 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 34.207µs
	I1101 01:17:39.650786 1321067 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1101 01:17:39.650793 1321067 cache.go:107] acquiring lock: {Name:mk25f668b9d26dbd5166e63a4b6fd4ebaa89c209 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650817 1321067 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1101 01:17:39.650821 1321067 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.292µs
	I1101 01:17:39.650827 1321067 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1101 01:17:39.650835 1321067 cache.go:107] acquiring lock: {Name:mk1ea4c71835f0e9602ac80532fc95f154ffac3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650861 1321067 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1101 01:17:39.650866 1321067 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 31.614µs
	I1101 01:17:39.650872 1321067 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1101 01:17:39.650879 1321067 cache.go:107] acquiring lock: {Name:mk835580881936495bac751ee7b074f531992fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 01:17:39.650905 1321067 cache.go:115] /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1101 01:17:39.650909 1321067 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 31.245µs
	I1101 01:17:39.650915 1321067 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1101 01:17:39.650920 1321067 cache.go:87] Successfully saved all images to host disk.
	I1101 01:17:39.650976 1321067 cli_runner.go:164] Run: docker container inspect stopped-upgrade-506779 --format={{.State.Status}}
	I1101 01:17:39.671214 1321067 fix.go:102] recreateIfNeeded on stopped-upgrade-506779: state=Stopped err=<nil>
	W1101 01:17:39.671243 1321067 fix.go:128] unexpected machine state, will restart: <nil>
	I1101 01:17:39.673570 1321067 out.go:177] * Restarting existing docker container for "stopped-upgrade-506779" ...
	I1101 01:17:39.675854 1321067 cli_runner.go:164] Run: docker start stopped-upgrade-506779
	I1101 01:17:40.041014 1321067 cli_runner.go:164] Run: docker container inspect stopped-upgrade-506779 --format={{.State.Status}}
	I1101 01:17:40.073292 1321067 kic.go:430] container "stopped-upgrade-506779" state is running.
	I1101 01:17:40.073692 1321067 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-506779
	I1101 01:17:40.098826 1321067 profile.go:148] Saving config to /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/stopped-upgrade-506779/config.json ...
	I1101 01:17:40.099077 1321067 machine.go:88] provisioning docker machine ...
	I1101 01:17:40.099127 1321067 ubuntu.go:169] provisioning hostname "stopped-upgrade-506779"
	I1101 01:17:40.099189 1321067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-506779
	I1101 01:17:40.125984 1321067 main.go:141] libmachine: Using SSH client type: native
	I1101 01:17:40.126406 1321067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34474 <nil> <nil>}
	I1101 01:17:40.126425 1321067 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-506779 && echo "stopped-upgrade-506779" | sudo tee /etc/hostname
	I1101 01:17:40.127012 1321067 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34352->127.0.0.1:34474: read: connection reset by peer
	I1101 01:17:43.275928 1321067 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-506779
	
	I1101 01:17:43.276007 1321067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-506779
	I1101 01:17:43.294350 1321067 main.go:141] libmachine: Using SSH client type: native
	I1101 01:17:43.294758 1321067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34474 <nil> <nil>}
	I1101 01:17:43.294783 1321067 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-506779' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-506779/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-506779' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 01:17:43.438973 1321067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1101 01:17:43.439001 1321067 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17486-1197516/.minikube CaCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17486-1197516/.minikube}
	I1101 01:17:43.439030 1321067 ubuntu.go:177] setting up certificates
	I1101 01:17:43.439039 1321067 provision.go:83] configureAuth start
	I1101 01:17:43.439102 1321067 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-506779
	I1101 01:17:43.459840 1321067 provision.go:138] copyHostCerts
	I1101 01:17:43.459908 1321067 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem, removing ...
	I1101 01:17:43.459939 1321067 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem
	I1101 01:17:43.460021 1321067 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/ca.pem (1082 bytes)
	I1101 01:17:43.460119 1321067 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem, removing ...
	I1101 01:17:43.460129 1321067 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem
	I1101 01:17:43.460157 1321067 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/cert.pem (1123 bytes)
	I1101 01:17:43.460217 1321067 exec_runner.go:144] found /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem, removing ...
	I1101 01:17:43.460227 1321067 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem
	I1101 01:17:43.460252 1321067 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17486-1197516/.minikube/key.pem (1675 bytes)
	I1101 01:17:43.460294 1321067 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-506779 san=[192.168.59.156 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-506779]
	I1101 01:17:43.723603 1321067 provision.go:172] copyRemoteCerts
	I1101 01:17:43.723671 1321067 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 01:17:43.723714 1321067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-506779
	I1101 01:17:43.742289 1321067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34474 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/stopped-upgrade-506779/id_rsa Username:docker}
	I1101 01:17:43.842192 1321067 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 01:17:43.865224 1321067 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 01:17:43.889869 1321067 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 01:17:43.913255 1321067 provision.go:86] duration metric: configureAuth took 474.201584ms
	I1101 01:17:43.913324 1321067 ubuntu.go:193] setting minikube options for container-runtime
	I1101 01:17:43.913537 1321067 config.go:182] Loaded profile config "stopped-upgrade-506779": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1101 01:17:43.913647 1321067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-506779
	I1101 01:17:43.935306 1321067 main.go:141] libmachine: Using SSH client type: native
	I1101 01:17:43.935726 1321067 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ae610] 0x3b0d80 <nil>  [] 0s} 127.0.0.1 34474 <nil> <nil>}
	I1101 01:17:43.935747 1321067 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 01:17:44.362269 1321067 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 01:17:44.362295 1321067 machine.go:91] provisioned docker machine in 4.263192918s
	I1101 01:17:44.362306 1321067 start.go:300] post-start starting for "stopped-upgrade-506779" (driver="docker")
	I1101 01:17:44.362317 1321067 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 01:17:44.362383 1321067 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 01:17:44.362428 1321067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-506779
	I1101 01:17:44.380674 1321067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34474 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/stopped-upgrade-506779/id_rsa Username:docker}
	I1101 01:17:44.478198 1321067 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 01:17:44.482068 1321067 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1101 01:17:44.482094 1321067 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 01:17:44.482106 1321067 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1101 01:17:44.482121 1321067 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1101 01:17:44.482158 1321067 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/addons for local assets ...
	I1101 01:17:44.482227 1321067 filesync.go:126] Scanning /home/jenkins/minikube-integration/17486-1197516/.minikube/files for local assets ...
	I1101 01:17:44.482325 1321067 filesync.go:149] local asset: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem -> 12028972.pem in /etc/ssl/certs
	I1101 01:17:44.482451 1321067 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 01:17:44.491411 1321067 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/ssl/certs/12028972.pem --> /etc/ssl/certs/12028972.pem (1708 bytes)
	I1101 01:17:44.514304 1321067 start.go:303] post-start completed in 151.981854ms
	I1101 01:17:44.514384 1321067 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 01:17:44.514430 1321067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-506779
	I1101 01:17:44.534473 1321067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34474 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/stopped-upgrade-506779/id_rsa Username:docker}
	I1101 01:17:44.630914 1321067 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 01:17:44.636570 1321067 fix.go:56] fixHost completed within 4.985829801s
	I1101 01:17:44.636640 1321067 start.go:83] releasing machines lock for "stopped-upgrade-506779", held for 4.985926827s
	I1101 01:17:44.636732 1321067 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-506779
	I1101 01:17:44.658301 1321067 ssh_runner.go:195] Run: cat /version.json
	I1101 01:17:44.658360 1321067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-506779
	I1101 01:17:44.658643 1321067 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 01:17:44.658702 1321067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-506779
	I1101 01:17:44.681374 1321067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34474 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/stopped-upgrade-506779/id_rsa Username:docker}
	I1101 01:17:44.692114 1321067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34474 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/stopped-upgrade-506779/id_rsa Username:docker}
	W1101 01:17:44.849637 1321067 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1101 01:17:44.849721 1321067 ssh_runner.go:195] Run: systemctl --version
	I1101 01:17:44.854837 1321067 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 01:17:45.037327 1321067 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1101 01:17:45.043822 1321067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:17:45.070407 1321067 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1101 01:17:45.070509 1321067 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 01:17:45.106318 1321067 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 01:17:45.106350 1321067 start.go:472] detecting cgroup driver to use...
	I1101 01:17:45.106390 1321067 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1101 01:17:45.106444 1321067 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 01:17:45.137513 1321067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 01:17:45.151242 1321067 docker.go:204] disabling cri-docker service (if available) ...
	I1101 01:17:45.151334 1321067 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 01:17:45.164270 1321067 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 01:17:45.177716 1321067 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1101 01:17:45.193275 1321067 docker.go:214] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1101 01:17:45.193347 1321067 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 01:17:45.300827 1321067 docker.go:220] disabling docker service ...
	I1101 01:17:45.300907 1321067 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 01:17:45.314551 1321067 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 01:17:45.326669 1321067 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 01:17:45.438373 1321067 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 01:17:45.546710 1321067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 01:17:45.558691 1321067 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 01:17:45.575429 1321067 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1101 01:17:45.575544 1321067 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 01:17:45.588524 1321067 out.go:177] 
	W1101 01:17:45.590434 1321067 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1101 01:17:45.590467 1321067 out.go:239] * 
	* 
	W1101 01:17:45.592154 1321067 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 01:17:45.595082 1321067 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-506779 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (68.29s)
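The exit status 90 above traces to one provisioning step: the new binary rewrites pause_image with sed in /etc/crio/crio.conf.d/02-crio.conf, but the v0.0.17 kicbase image this v1.17.0-era profile was built on does not ship that drop-in file, so sed has nothing to read. A hedged diagnostic sketch, not part of the test (container name from the log; where the older image actually keeps its cri-o config is an assumption to be confirmed by the grep):

	# Confirm the drop-in file really is missing inside the old kicbase container.
	docker exec stopped-upgrade-506779 ls -l /etc/crio/crio.conf.d/02-crio.conf
	# Locate wherever pause_image actually lives in this image (possibly /etc/crio/crio.conf).
	docker exec stopped-upgrade-506779 grep -r pause_image /etc/crio/ 2>/dev/null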

                                                
                                    

Test pass (271/308)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.82
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.3/json-events 11.36
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.26
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
19 TestBinaryMirror 0.63
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
25 TestAddons/Setup 148.31
27 TestAddons/parallel/Registry 16.53
29 TestAddons/parallel/InspektorGadget 11.02
30 TestAddons/parallel/MetricsServer 5.93
33 TestAddons/parallel/CSI 69.72
34 TestAddons/parallel/Headlamp 11.17
35 TestAddons/parallel/CloudSpanner 5.63
36 TestAddons/parallel/LocalPath 9.32
37 TestAddons/parallel/NvidiaDevicePlugin 5.57
40 TestAddons/serial/GCPAuth/Namespaces 0.17
41 TestAddons/StoppedEnableDisable 12.38
42 TestCertOptions 37.85
43 TestCertExpiration 254.99
45 TestForceSystemdFlag 43.87
46 TestForceSystemdEnv 44.3
52 TestErrorSpam/setup 30.45
53 TestErrorSpam/start 0.87
54 TestErrorSpam/status 1.19
55 TestErrorSpam/pause 1.92
56 TestErrorSpam/unpause 2.03
57 TestErrorSpam/stop 1.5
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 48.78
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 40.71
64 TestFunctional/serial/KubeContext 0.06
65 TestFunctional/serial/KubectlGetPods 0.1
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.75
69 TestFunctional/serial/CacheCmd/cache/add_local 1.13
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.21
74 TestFunctional/serial/CacheCmd/cache/delete 0.15
75 TestFunctional/serial/MinikubeKubectlCmd 0.16
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
77 TestFunctional/serial/ExtraConfig 33.01
78 TestFunctional/serial/ComponentHealth 0.1
79 TestFunctional/serial/LogsCmd 1.84
80 TestFunctional/serial/LogsFileCmd 1.87
81 TestFunctional/serial/InvalidService 5
83 TestFunctional/parallel/ConfigCmd 0.63
84 TestFunctional/parallel/DashboardCmd 6.92
85 TestFunctional/parallel/DryRun 0.51
86 TestFunctional/parallel/InternationalLanguage 0.21
87 TestFunctional/parallel/StatusCmd 1.15
91 TestFunctional/parallel/ServiceCmdConnect 36.68
92 TestFunctional/parallel/AddonsCmd 0.17
95 TestFunctional/parallel/SSHCmd 0.79
96 TestFunctional/parallel/CpCmd 1.71
98 TestFunctional/parallel/FileSync 0.3
99 TestFunctional/parallel/CertSync 1.86
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.84
107 TestFunctional/parallel/License 0.38
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.39
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
119 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
120 TestFunctional/parallel/ServiceCmd/List 0.56
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
123 TestFunctional/parallel/ServiceCmd/Format 0.41
124 TestFunctional/parallel/ServiceCmd/URL 0.41
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
126 TestFunctional/parallel/ProfileCmd/profile_list 0.46
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
128 TestFunctional/parallel/MountCmd/any-port 23.35
129 TestFunctional/parallel/MountCmd/specific-port 1.78
130 TestFunctional/parallel/MountCmd/VerifyCleanup 2.23
131 TestFunctional/parallel/Version/short 0.08
132 TestFunctional/parallel/Version/components 0.9
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
137 TestFunctional/parallel/ImageCommands/ImageBuild 2.94
138 TestFunctional/parallel/ImageCommands/Setup 1.77
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.31
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.93
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.51
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.93
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
149 TestFunctional/delete_addon-resizer_images 0.08
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 97.5
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.67
162 TestJSONOutput/start/Command 75.41
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.85
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.75
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.99
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.26
187 TestKicCustomNetwork/create_custom_network 42.7
188 TestKicCustomNetwork/use_default_bridge_network 34.73
189 TestKicExistingNetwork 34.5
190 TestKicCustomSubnet 36.68
191 TestKicStaticIP 34.29
192 TestMainNoArgs 0.07
193 TestMinikubeProfile 75.12
196 TestMountStart/serial/StartWithMountFirst 9.74
197 TestMountStart/serial/VerifyMountFirst 0.31
198 TestMountStart/serial/StartWithMountSecond 6.82
199 TestMountStart/serial/VerifyMountSecond 0.29
200 TestMountStart/serial/DeleteFirst 1.69
201 TestMountStart/serial/VerifyMountPostDelete 0.29
202 TestMountStart/serial/Stop 1.26
203 TestMountStart/serial/RestartStopped 8.09
204 TestMountStart/serial/VerifyMountPostStop 0.3
207 TestMultiNode/serial/FreshStart2Nodes 121.13
208 TestMultiNode/serial/DeployApp2Nodes 5.43
210 TestMultiNode/serial/AddNode 49.36
211 TestMultiNode/serial/ProfileList 0.38
212 TestMultiNode/serial/CopyFile 11.34
213 TestMultiNode/serial/StopNode 2.39
214 TestMultiNode/serial/StartAfterStop 12.51
215 TestMultiNode/serial/RestartKeepsNodes 119.79
216 TestMultiNode/serial/DeleteNode 5.11
217 TestMultiNode/serial/StopMultiNode 24.1
218 TestMultiNode/serial/RestartMultiNode 78.55
219 TestMultiNode/serial/ValidateNameConflict 33.24
224 TestPreload 178.62
226 TestScheduledStopUnix 105.63
229 TestInsufficientStorage 13.68
232 TestKubernetesUpgrade 391.11
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
236 TestNoKubernetes/serial/StartWithK8s 42.71
237 TestNoKubernetes/serial/StartWithStopK8s 12.8
238 TestNoKubernetes/serial/Start 9.63
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
240 TestNoKubernetes/serial/ProfileList 1.07
241 TestNoKubernetes/serial/Stop 1.29
242 TestNoKubernetes/serial/StartNoArgs 7.22
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.53
244 TestStoppedBinaryUpgrade/Setup 1.04
246 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
255 TestPause/serial/Start 80.58
256 TestPause/serial/SecondStartNoReconfiguration 42.79
257 TestPause/serial/Pause 0.84
258 TestPause/serial/VerifyStatus 0.37
259 TestPause/serial/Unpause 0.74
260 TestPause/serial/PauseAgain 0.99
261 TestPause/serial/DeletePaused 2.85
262 TestPause/serial/VerifyDeletedResources 0.37
270 TestNetworkPlugins/group/false 5.45
275 TestStartStop/group/old-k8s-version/serial/FirstStart 140.07
276 TestStartStop/group/old-k8s-version/serial/DeployApp 9.61
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.4
278 TestStartStop/group/old-k8s-version/serial/Stop 12.38
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
280 TestStartStop/group/old-k8s-version/serial/SecondStart 442.13
282 TestStartStop/group/no-preload/serial/FirstStart 67.5
283 TestStartStop/group/no-preload/serial/DeployApp 9.47
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
285 TestStartStop/group/no-preload/serial/Stop 12.1
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
287 TestStartStop/group/no-preload/serial/SecondStart 349.04
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.19
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.53
291 TestStartStop/group/old-k8s-version/serial/Pause 5.3
292 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.04
294 TestStartStop/group/embed-certs/serial/FirstStart 85.72
295 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
296 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
297 TestStartStop/group/no-preload/serial/Pause 4.38
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.62
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.51
301 TestStartStop/group/embed-certs/serial/DeployApp 10.5
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.25
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
305 TestStartStop/group/embed-certs/serial/Stop 12.1
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
307 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 352.58
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
309 TestStartStop/group/embed-certs/serial/SecondStart 613.82
310 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.03
311 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
312 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.57
313 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.6
315 TestStartStop/group/newest-cni/serial/FirstStart 46.57
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
318 TestStartStop/group/newest-cni/serial/Stop 1.28
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
320 TestStartStop/group/newest-cni/serial/SecondStart 29.88
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
324 TestStartStop/group/newest-cni/serial/Pause 3.31
325 TestNetworkPlugins/group/auto/Start 84.32
326 TestNetworkPlugins/group/auto/KubeletFlags 0.35
327 TestNetworkPlugins/group/auto/NetCatPod 11.34
328 TestNetworkPlugins/group/auto/DNS 0.22
329 TestNetworkPlugins/group/auto/Localhost 0.2
330 TestNetworkPlugins/group/auto/HairPin 0.18
331 TestNetworkPlugins/group/kindnet/Start 46.81
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
336 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
337 TestStartStop/group/embed-certs/serial/Pause 4.76
338 TestNetworkPlugins/group/kindnet/NetCatPod 12.42
339 TestNetworkPlugins/group/calico/Start 83.44
340 TestNetworkPlugins/group/kindnet/DNS 0.22
341 TestNetworkPlugins/group/kindnet/Localhost 0.21
342 TestNetworkPlugins/group/kindnet/HairPin 0.19
343 TestNetworkPlugins/group/custom-flannel/Start 68.78
344 TestNetworkPlugins/group/calico/ControllerPod 5.05
345 TestNetworkPlugins/group/calico/KubeletFlags 0.34
346 TestNetworkPlugins/group/calico/NetCatPod 9.42
347 TestNetworkPlugins/group/calico/DNS 0.23
348 TestNetworkPlugins/group/calico/Localhost 0.2
349 TestNetworkPlugins/group/calico/HairPin 0.23
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
351 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.39
352 TestNetworkPlugins/group/custom-flannel/DNS 0.27
353 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
354 TestNetworkPlugins/group/custom-flannel/HairPin 0.28
355 TestNetworkPlugins/group/enable-default-cni/Start 52.88
356 TestNetworkPlugins/group/flannel/Start 66.38
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.42
359 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
360 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
362 TestNetworkPlugins/group/flannel/ControllerPod 5.04
363 TestNetworkPlugins/group/bridge/Start 91.52
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.51
365 TestNetworkPlugins/group/flannel/NetCatPod 11.4
366 TestNetworkPlugins/group/flannel/DNS 0.27
367 TestNetworkPlugins/group/flannel/Localhost 0.26
368 TestNetworkPlugins/group/flannel/HairPin 0.24
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
370 TestNetworkPlugins/group/bridge/NetCatPod 9.3
371 TestNetworkPlugins/group/bridge/DNS 0.19
372 TestNetworkPlugins/group/bridge/Localhost 0.18
373 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (14.82s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-851884 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-851884 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.824414028s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.82s)
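
Note: with -o=json, minikube start emits one JSON event per line on stdout, and that stream is what this subtest consumes. A minimal sketch of reading such a stream in Go (the exact event schema is not shown in this log, so the sketch decodes into a generic map; the "type" field name is an assumption):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Read `minikube start -o=json` output: each stdout line is a
	// self-contained JSON event. Decoding into a generic map avoids
	// guessing the exact schema.
	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some events are long
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON line
			}
			fmt.Println("event:", ev["type"]) // "type" is an assumed field name
		}
	}

For example, the start command logged above could be piped into it: out/minikube-linux-arm64 start -o=json --download-only -p download-only-851884 ... | go run readevents.go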
TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
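
Note: preload-exists reduces to checking that the preload tarball landed in the cache. A trivial equivalent check (cache layout copied from the download target logged above; the MINIKUBE_HOME handling is simplified):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// The preload-exists subtest amounts to a stat on the cached tarball.
	func main() {
		p := filepath.Join(os.Getenv("MINIKUBE_HOME"),
			"cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("preload exists:", p)
	}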
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-851884
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-851884: exit status 85 (91.482682ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-851884 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC |          |
	|         | -p download-only-851884        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:32:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:32:03.571733 1202902 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:32:03.571978 1202902 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:32:03.572005 1202902 out.go:309] Setting ErrFile to fd 2...
	I1101 00:32:03.572026 1202902 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:32:03.572320 1202902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	W1101 00:32:03.572482 1202902 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17486-1197516/.minikube/config/config.json: open /home/jenkins/minikube-integration/17486-1197516/.minikube/config/config.json: no such file or directory
	I1101 00:32:03.572917 1202902 out.go:303] Setting JSON to true
	I1101 00:32:03.573868 1202902 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29671,"bootTime":1698769053,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 00:32:03.573963 1202902 start.go:138] virtualization:  
	I1101 00:32:03.577588 1202902 out.go:97] [download-only-851884] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 00:32:03.579947 1202902 out.go:169] MINIKUBE_LOCATION=17486
	W1101 00:32:03.577832 1202902 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 00:32:03.577877 1202902 notify.go:220] Checking for updates...
	I1101 00:32:03.582365 1202902 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:32:03.584559 1202902 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:32:03.586620 1202902 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 00:32:03.588612 1202902 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 00:32:03.592295 1202902 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 00:32:03.592553 1202902 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:32:03.617770 1202902 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 00:32:03.617873 1202902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:32:03.698256 1202902 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-01 00:32:03.688578442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:32:03.698359 1202902 docker.go:295] overlay module found
	I1101 00:32:03.700476 1202902 out.go:97] Using the docker driver based on user configuration
	I1101 00:32:03.700497 1202902 start.go:298] selected driver: docker
	I1101 00:32:03.700503 1202902 start.go:902] validating driver "docker" against <nil>
	I1101 00:32:03.700596 1202902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:32:03.768371 1202902 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-01 00:32:03.759043496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:32:03.768531 1202902 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1101 00:32:03.768816 1202902 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1101 00:32:03.768974 1202902 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 00:32:03.771358 1202902 out.go:169] Using Docker driver with root privileges
	I1101 00:32:03.773205 1202902 cni.go:84] Creating CNI manager for ""
	I1101 00:32:03.773227 1202902 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:32:03.773290 1202902 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 00:32:03.773310 1202902 start_flags.go:323] config:
	{Name:download-only-851884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-851884 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:32:03.775211 1202902 out.go:97] Starting control plane node download-only-851884 in cluster download-only-851884
	I1101 00:32:03.775232 1202902 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 00:32:03.776959 1202902 out.go:97] Pulling base image ...
	I1101 00:32:03.777008 1202902 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1101 00:32:03.777102 1202902 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon
	I1101 00:32:03.793505 1202902 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 to local cache
	I1101 00:32:03.793757 1202902 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local cache directory
	I1101 00:32:03.793857 1202902 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 to local cache
	I1101 00:32:03.867847 1202902 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1101 00:32:03.867872 1202902 cache.go:56] Caching tarball of preloaded images
	I1101 00:32:03.868049 1202902 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1101 00:32:03.870589 1202902 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1101 00:32:03.870614 1202902 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:32:03.990037 1202902 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1101 00:32:13.237632 1202902 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-851884"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
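
Note: the non-zero exit above is expected. A --download-only run never creates a control plane node (hence the control plane node "" message), so `minikube logs` fails with exit status 85 and the test treats that failure as a pass. A minimal sketch of that kind of assertion (the helper name is hypothetical, not the actual aaa_download_only_test.go code):

	package main

	import (
		"os/exec"
		"testing"
	)

	// expectLogsToFail asserts that `minikube logs` exits non-zero for a
	// profile created with --download-only, where no control plane node
	// exists (exit status 85 in the run above).
	func expectLogsToFail(t *testing.T, profile string) {
		out, err := exec.Command("out/minikube-linux-arm64", "logs", "-p", profile).CombinedOutput()
		if err == nil {
			t.Fatalf("expected `minikube logs -p %s` to fail, got success:\n%s", profile, out)
		}
	}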
TestDownloadOnly/v1.28.3/json-events (11.36s)
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-851884 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-851884 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.359059814s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (11.36s)
TestDownloadOnly/v1.28.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)
TestDownloadOnly/v1.28.3/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-851884
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-851884: exit status 85 (86.742434ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-851884 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC |          |
	|         | -p download-only-851884        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	| start   | -o=json --download-only        | download-only-851884 | jenkins | v1.32.0-beta.0 | 01 Nov 23 00:32 UTC |          |
	|         | -p download-only-851884        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=docker                |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/01 00:32:18
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 00:32:18.493882 1202982 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:32:18.494185 1202982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:32:18.494195 1202982 out.go:309] Setting ErrFile to fd 2...
	I1101 00:32:18.494203 1202982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:32:18.494500 1202982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	W1101 00:32:18.494621 1202982 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17486-1197516/.minikube/config/config.json: open /home/jenkins/minikube-integration/17486-1197516/.minikube/config/config.json: no such file or directory
	I1101 00:32:18.494866 1202982 out.go:303] Setting JSON to true
	I1101 00:32:18.495841 1202982 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29686,"bootTime":1698769053,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 00:32:18.495914 1202982 start.go:138] virtualization:  
	I1101 00:32:18.498720 1202982 out.go:97] [download-only-851884] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 00:32:18.501068 1202982 out.go:169] MINIKUBE_LOCATION=17486
	I1101 00:32:18.498989 1202982 notify.go:220] Checking for updates...
	I1101 00:32:18.505806 1202982 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:32:18.507861 1202982 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:32:18.509806 1202982 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 00:32:18.511837 1202982 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1101 00:32:18.516276 1202982 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 00:32:18.516841 1202982 config.go:182] Loaded profile config "download-only-851884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1101 00:32:18.516890 1202982 start.go:810] api.Load failed for download-only-851884: filestore "download-only-851884": Docker machine "download-only-851884" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1101 00:32:18.516979 1202982 driver.go:378] Setting default libvirt URI to qemu:///system
	W1101 00:32:18.517040 1202982 start.go:810] api.Load failed for download-only-851884: filestore "download-only-851884": Docker machine "download-only-851884" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1101 00:32:18.541517 1202982 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 00:32:18.541622 1202982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:32:18.636647 1202982 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-01 00:32:18.626826481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:32:18.636761 1202982 docker.go:295] overlay module found
	I1101 00:32:18.638939 1202982 out.go:97] Using the docker driver based on existing profile
	I1101 00:32:18.638982 1202982 start.go:298] selected driver: docker
	I1101 00:32:18.638990 1202982 start.go:902] validating driver "docker" against &{Name:download-only-851884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-851884 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:32:18.639174 1202982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:32:18.705462 1202982 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-01 00:32:18.69612022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:32:18.705955 1202982 cni.go:84] Creating CNI manager for ""
	I1101 00:32:18.705974 1202982 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 00:32:18.705990 1202982 start_flags.go:323] config:
	{Name:download-only-851884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-851884 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:32:18.708075 1202982 out.go:97] Starting control plane node download-only-851884 in cluster download-only-851884
	I1101 00:32:18.708098 1202982 cache.go:121] Beginning downloading kic base image for docker with crio
	I1101 00:32:18.710083 1202982 out.go:97] Pulling base image ...
	I1101 00:32:18.710109 1202982 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:32:18.710274 1202982 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local docker daemon
	I1101 00:32:18.726487 1202982 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 to local cache
	I1101 00:32:18.726634 1202982 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local cache directory
	I1101 00:32:18.726657 1202982 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 in local cache directory, skipping pull
	I1101 00:32:18.726665 1202982 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 exists in cache, skipping pull
	I1101 00:32:18.726674 1202982 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 as a tarball
	I1101 00:32:18.788086 1202982 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1101 00:32:18.788111 1202982 cache.go:56] Caching tarball of preloaded images
	I1101 00:32:18.788287 1202982 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1101 00:32:18.790947 1202982 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1101 00:32:18.790967 1202982 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:32:18.902983 1202982 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1101 00:32:28.022421 1202982 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1101 00:32:28.022530 1202982 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17486-1197516/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-851884"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.09s)
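
Note: the "getting checksum" / "verifying checksum" steps above guard the preload download. The tarball URL carries ?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac and the file is re-hashed locally after download. A bare-bones version of the equivalent md5 comparison (the helper name is mine; minikube delegates the real check to its download library):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 re-hashes a downloaded preload tarball and compares it
	// with the md5 advertised in the ?checksum= query of the download URL.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Digest copied from the v1.28.3 download URL above.
		fmt.Println(verifyMD5(
			"preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4",
			"3fdaeefa2c0cc3e046170ba83ccf0cac"))
	}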
TestDownloadOnly/DeleteAll (0.26s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)
TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-851884
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)
TestBinaryMirror (0.63s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-601750 --alsologtostderr --binary-mirror http://127.0.0.1:46695 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-601750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-601750
--- PASS: TestBinaryMirror (0.63s)
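
Note: TestBinaryMirror points minikube at http://127.0.0.1:46695 instead of the default Kubernetes release host. A hedged sketch of such a mirror, assuming it only needs to serve pre-downloaded kubeadm/kubelet/kubectl files over plain HTTP (the directory layout shown is an assumption mirroring the upstream release paths, not verified against the test harness):

	package main

	import (
		"log"
		"net/http"
	)

	// Serve a local directory of Kubernetes release binaries so that
	// `minikube start --binary-mirror http://127.0.0.1:46695 ...` can
	// fetch them from it. Assumed layout, e.g.:
	//   ./mirror/v1.28.3/bin/linux/arm64/kubectl
	func main() {
		log.Fatal(http.ListenAndServe("127.0.0.1:46695",
			http.FileServer(http.Dir("./mirror"))))
	}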
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-864560
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-864560: exit status 85 (92.628577ms)
-- stdout --
	* Profile "addons-864560" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-864560"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-864560
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-864560: exit status 85 (98.918509ms)
-- stdout --
	* Profile "addons-864560" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-864560"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)
TestAddons/Setup (148.31s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-864560 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-864560 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m28.305629113s)
--- PASS: TestAddons/Setup (148.31s)
TestAddons/parallel/Registry (16.53s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 57.321823ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-pscsg" [0de356cb-166c-4eeb-b9a8-cbd31f74f4bc] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016858304s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-p9xzm" [2afe95c8-ac6a-4431-8018-5c27cd0852dd] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012318835s
addons_test.go:339: (dbg) Run:  kubectl --context addons-864560 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-864560 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-864560 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.360414625s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 ip
2023/11/01 00:35:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.53s)
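
Note: the decisive step above is the in-cluster `wget --spider -S http://registry.kube-system.svc.cluster.local` probe. The same reachability check in Go, for reference (it must run inside the cluster so the service DNS name resolves; the 5s timeout is an arbitrary choice, not the test's):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// Same check as the busybox `wget --spider` above: a HEAD request
	// against the registry service, verifying reachability only.
	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println("registry reachable:", resp.Status)
	}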
TestAddons/parallel/InspektorGadget (11.02s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fxrlb" [a8690821-6eff-49ee-b60a-d398b7febff4] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014039488s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-864560
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-864560: (6.000134665s)
--- PASS: TestAddons/parallel/InspektorGadget (11.02s)
TestAddons/parallel/MetricsServer (5.93s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 5.166239ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-25rbs" [89137386-d1fd-406f-8465-066e80796edc] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.024703179s
addons_test.go:414: (dbg) Run:  kubectl --context addons-864560 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.93s)
TestAddons/parallel/CSI (69.72s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 62.020521ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-864560 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-864560 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [41830511-3d30-4f61-b877-1bbbe07872ba] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [41830511-3d30-4f61-b877-1bbbe07872ba] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.011640078s
addons_test.go:583: (dbg) Run:  kubectl --context addons-864560 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-864560 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-864560 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-864560 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-864560 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-864560 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-864560 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a4e45066-da12-401a-bc63-78660a85fde7] Pending
helpers_test.go:344: "task-pv-pod-restore" [a4e45066-da12-401a-bc63-78660a85fde7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a4e45066-da12-401a-bc63-78660a85fde7] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.023591691s
addons_test.go:625: (dbg) Run:  kubectl --context addons-864560 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-864560 delete pod task-pv-pod-restore: (1.104498271s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-864560 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-864560 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-864560 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.799732976s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.72s)
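
Note: the long runs of helpers_test.go:394 lines above are a poll loop: the helper re-runs `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports Bound or the wait window closes. The same loop drives the hpvc-restore wait here and the test-pvc wait in LocalPath below. A simplified sketch of that loop (function name and the 2s interval are assumptions, not the actual helpers_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound polls the PVC phase the way the helper does,
	// re-running kubectl until the claim is Bound or the deadline passes.
	func waitForPVCBound(kubecontext, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubecontext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // poll interval is an assumption
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
	}

	func main() {
		// Values taken from the CSI test above: 6m0s wait for "hpvc".
		if err := waitForPVCBound("addons-864560", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}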
TestAddons/parallel/Headlamp (11.17s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-864560 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-864560 --alsologtostderr -v=1: (1.146578309s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-vmgvf" [a8fb2fb5-9234-455f-8f55-58383c631545] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-vmgvf" [a8fb2fb5-9234-455f-8f55-58383c631545] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.017171899s
--- PASS: TestAddons/parallel/Headlamp (11.17s)

TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-jjmng" [8b9e0204-0a0f-41a0-8c98-6b6a3884eb77] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010022149s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-864560
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/parallel/LocalPath (9.32s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-864560 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-864560 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  [phase poll above repeated 4 more times while waiting for pvc "test-pvc"]
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a43fd332-d00e-4d1f-a279-b030e396e56c] Pending
helpers_test.go:344: "test-local-path" [a43fd332-d00e-4d1f-a279-b030e396e56c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a43fd332-d00e-4d1f-a279-b030e396e56c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a43fd332-d00e-4d1f-a279-b030e396e56c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.009116116s
addons_test.go:890: (dbg) Run:  kubectl --context addons-864560 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 ssh "cat /opt/local-path-provisioner/pvc-88eb9be0-9144-4090-b7b4-bfa3bc5fed6f_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-864560 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-864560 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-864560 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.32s)
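Note: the host-side "cat" works because the local-path provisioner backs each bound PVC with a node directory named pvc-<uid>_<namespace>_<claim>, as the path above shows. The same round trip by hand, roughly (manifest names hypothetical):

	$ kubectl apply -f pvc.yaml    # PVC with storageClassName: local-path
	$ kubectl apply -f pod.yaml    # pod that writes file1 into the mounted volume
	$ minikube ssh -- ls /opt/local-path-provisioner/pvc-*_default_test-pvc/
	file1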

TestAddons/parallel/NvidiaDevicePlugin (5.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jttg2" [de1942f3-46cd-42dc-a069-706b386eefc0] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.023767985s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-864560
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-864560 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-864560 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-864560
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-864560: (12.062203432s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-864560
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-864560
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-864560
--- PASS: TestAddons/StoppedEnableDisable (12.38s)

TestCertOptions (37.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-537404 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-537404 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.024544635s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-537404 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-537404 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-537404 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-537404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-537404
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-537404: (2.059776736s)
--- PASS: TestCertOptions (37.85s)
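Note: the openssl step is the substance of this test; it asserts that every --apiserver-ips/--apiserver-names value (and port 8555) landed in the API server certificate. An equivalent manual check, as a sketch:

	$ out/minikube-linux-arm64 -p cert-options-537404 ssh \
	      "openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" \
	      | grep -A1 'Subject Alternative Name'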

TestCertExpiration (254.99s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-346605 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1101 01:22:02.259200 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-346605 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (36.187000419s)
E1101 01:22:55.882058 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-346605 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-346605 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (33.983263909s)
helpers_test.go:175: Cleaning up "cert-expiration-346605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-346605
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-346605: (4.816282998s)
--- PASS: TestCertExpiration (254.99s)
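Note: the second start with --cert-expiration=8760h succeeds because minikube regenerates the certificates that the 3m setting let expire. One way to confirm the new lifetime by hand (illustrative, not part of the test):

	$ out/minikube-linux-arm64 -p cert-expiration-346605 ssh -- \
	      sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt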

TestForceSystemdFlag (43.87s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-798558 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-798558 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.813545341s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-798558 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-798558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-798558
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-798558: (2.693472041s)
--- PASS: TestForceSystemdFlag (43.87s)
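Note: the cat of 02-crio.conf is the assertion here; with --force-systemd, CRI-O should be configured for the systemd cgroup manager. Roughly what to expect (exact file contents vary by kicbase version):

	$ out/minikube-linux-arm64 -p force-systemd-flag-798558 ssh -- \
	      grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf
	cgroup_manager = "systemd"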

TestForceSystemdEnv (44.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-812961 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-812961 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.321446944s)
helpers_test.go:175: Cleaning up "force-systemd-env-812961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-812961
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-812961: (2.982997435s)
--- PASS: TestForceSystemdEnv (44.30s)
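Note: same behavior as TestForceSystemdFlag, but driven through the environment instead of a flag. A sketch using the MINIKUBE_FORCE_SYSTEMD variable that appears in the start output later in this report:

	$ MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p demo \
	      --driver=docker --container-runtime=crio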

TestErrorSpam/setup (30.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-363817 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-363817 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-363817 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-363817 --driver=docker  --container-runtime=crio: (30.44673688s)
--- PASS: TestErrorSpam/setup (30.45s)

TestErrorSpam/start (0.87s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 start --dry-run
--- PASS: TestErrorSpam/start (0.87s)

TestErrorSpam/status (1.19s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 status
--- PASS: TestErrorSpam/status (1.19s)

TestErrorSpam/pause (1.92s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 pause
--- PASS: TestErrorSpam/pause (1.92s)

TestErrorSpam/unpause (2.03s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 unpause
--- PASS: TestErrorSpam/unpause (2.03s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 stop: (1.254690198s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-363817 --log_dir /tmp/nospam-363817 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17486-1197516/.minikube/files/etc/test/nested/copy/1202897/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
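Note: the synced path comes from minikube's file-sync convention: anything placed under $MINIKUBE_HOME/.minikube/files/ is copied into the node at the corresponding absolute path on start. A sketch of the same setup:

	$ mkdir -p ~/.minikube/files/etc/test/nested/copy/1202897
	$ echo 127.0.0.1 > ~/.minikube/files/etc/test/nested/copy/1202897/hosts
	$ minikube start && minikube ssh -- cat /etc/test/nested/copy/1202897/hosts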

TestFunctional/serial/StartWithProxy (48.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258660 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1101 00:40:00.145049 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
E1101 [same cert_rotation error repeated 12 more times with increasing backoff, through 00:40:20.627981]
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-258660 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (48.784528228s)
--- PASS: TestFunctional/serial/StartWithProxy (48.78s)
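Note: despite the cert_rotation noise, the proxied start completed in 48.8s. The test drives the proxy configuration through the environment; from a shell the equivalent is roughly (proxy address hypothetical):

	$ HTTP_PROXY=http://127.0.0.1:3128 NO_PROXY=192.168.49.0/24 \
	      minikube start --driver=docker --container-runtime=crio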

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.71s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258660 --alsologtostderr -v=8
E1101 00:40:41.108334 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-258660 --alsologtostderr -v=8: (40.711840242s)
functional_test.go:659: soft start took 40.712331627s for "functional-258660" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.71s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-258660 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 cache add registry.k8s.io/pause:3.1: (1.302111218s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 cache add registry.k8s.io/pause:3.3: (1.362650209s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 cache add registry.k8s.io/pause:latest: (1.088520281s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.75s)

TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-258660 /tmp/TestFunctionalserialCacheCmdcacheadd_local3056433300/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 cache add minikube-local-cache-test:functional-258660
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 cache delete minikube-local-cache-test:functional-258660
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-258660
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (340.693809ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 cache reload: (1.143630952s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 kubectl -- --context functional-258660 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-258660 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (33.01s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258660 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 00:41:22.070240 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-258660 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.011140037s)
functional_test.go:757: restart took 33.011240566s for "functional-258660" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.01s)
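Note: --extra-config takes component.key=value pairs and forwards each to the named kubeadm component, as in the run above. Other invocations of the same shape, for illustration:

	$ minikube start --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
	$ minikube start --extra-config=kubelet.max-pods=150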

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-258660 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
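Note: the phase/status pairs above are read from the control-plane pod list. A one-line approximation of the same check with jsonpath:

	$ kubectl --context functional-258660 -n kube-system get po -l tier=control-plane \
	      -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'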

TestFunctional/serial/LogsCmd (1.84s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 logs: (1.83622178s)
--- PASS: TestFunctional/serial/LogsCmd (1.84s)

TestFunctional/serial/LogsFileCmd (1.87s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 logs --file /tmp/TestFunctionalserialLogsFileCmd3713125167/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 logs --file /tmp/TestFunctionalserialLogsFileCmd3713125167/001/logs.txt: (1.86904184s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.87s)

TestFunctional/serial/InvalidService (5s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-258660 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-258660
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-258660: exit status 115 (592.279458ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31552 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-258660 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-258660 delete -f testdata/invalidsvc.yaml: (1.078851066s)
--- PASS: TestFunctional/serial/InvalidService (5.00s)

TestFunctional/parallel/ConfigCmd (0.63s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 config get cpus: exit status 14 (112.935343ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 config get cpus: exit status 14 (114.729488ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.63s)
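Note: "config get" on an unset key is expected to fail (exit status 14 with the error shown), which is what the unset/get pairs above exercise. The happy path for comparison:

	$ minikube -p functional-258660 config set cpus 2
	$ minikube -p functional-258660 config get cpus
	2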

TestFunctional/parallel/DashboardCmd (6.92s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-258660 --alsologtostderr -v=1]
2023/11/01 00:43:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-258660 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1228911: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.92s)

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-258660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (209.52038ms)
-- stdout --
	* [functional-258660] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1101 00:43:29.761940 1228690 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:43:29.762128 1228690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:43:29.762138 1228690 out.go:309] Setting ErrFile to fd 2...
	I1101 00:43:29.762144 1228690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:43:29.762422 1228690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 00:43:29.762796 1228690 out.go:303] Setting JSON to false
	I1101 00:43:29.763873 1228690 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30357,"bootTime":1698769053,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 00:43:29.763949 1228690 start.go:138] virtualization:  
	I1101 00:43:29.766435 1228690 out.go:177] * [functional-258660] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 00:43:29.768150 1228690 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:43:29.770268 1228690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:43:29.768268 1228690 notify.go:220] Checking for updates...
	I1101 00:43:29.773786 1228690 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:43:29.775930 1228690 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 00:43:29.777604 1228690 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 00:43:29.779448 1228690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:43:29.782267 1228690 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:43:29.782782 1228690 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:43:29.808294 1228690 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 00:43:29.808414 1228690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:43:29.889633 1228690 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-01 00:43:29.879694544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:43:29.889744 1228690 docker.go:295] overlay module found
	I1101 00:43:29.891817 1228690 out.go:177] * Using the docker driver based on existing profile
	I1101 00:43:29.893700 1228690 start.go:298] selected driver: docker
	I1101 00:43:29.893719 1228690 start.go:902] validating driver "docker" against &{Name:functional-258660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-258660 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:43:29.893814 1228690 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:43:29.896100 1228690 out.go:177] 
	W1101 00:43:29.898113 1228690 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 00:43:29.899986 1228690 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258660 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
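Note: --dry-run runs the full validation pass without creating or mutating anything, so invalid settings surface as distinct exit codes (23, RSRC_INSUFFICIENT_REQ_MEMORY, in the failed run above). For example:

	$ minikube start -p functional-258660 --dry-run --memory 250MB; echo $?
	23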

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-258660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-258660 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (213.22418ms)
-- stdout --
	* [functional-258660] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1101 00:43:29.548511 1228650 out.go:296] Setting OutFile to fd 1 ...
	I1101 00:43:29.548715 1228650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:43:29.548743 1228650 out.go:309] Setting ErrFile to fd 2...
	I1101 00:43:29.548764 1228650 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 00:43:29.549165 1228650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 00:43:29.549529 1228650 out.go:303] Setting JSON to false
	I1101 00:43:29.550560 1228650 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30357,"bootTime":1698769053,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 00:43:29.550670 1228650 start.go:138] virtualization:  
	I1101 00:43:29.554863 1228650 out.go:177] * [functional-258660] minikube v1.32.0-beta.0 sur Ubuntu 20.04 (arm64)
	I1101 00:43:29.557085 1228650 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 00:43:29.559238 1228650 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 00:43:29.557157 1228650 notify.go:220] Checking for updates...
	I1101 00:43:29.561482 1228650 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 00:43:29.563477 1228650 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 00:43:29.565462 1228650 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 00:43:29.567421 1228650 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 00:43:29.569850 1228650 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 00:43:29.570386 1228650 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 00:43:29.598845 1228650 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 00:43:29.598955 1228650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 00:43:29.679907 1228650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-01 00:43:29.670129922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 00:43:29.680013 1228650 docker.go:295] overlay module found
	I1101 00:43:29.682223 1228650 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1101 00:43:29.684031 1228650 start.go:298] selected driver: docker
	I1101 00:43:29.684050 1228650 start.go:902] validating driver "docker" against &{Name:functional-258660 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.41-1698773672-17486@sha256:a46d6433f6f7543af472f7b8b305faa2da36b546834792a3c1a481f02ce07458 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-258660 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1101 00:43:29.684159 1228650 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 00:43:29.686518 1228650 out.go:177] 
	W1101 00:43:29.688396 1228650 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 00:43:29.690241 1228650 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
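Note: this test drives minikube under a French locale with a deliberately undersized memory request and asserts on the localized failure output above. A rough manual reproduction, assuming LC_ALL is what selects the translation and that start --dry-run validates without mutating state:

	# Sketch: trigger the localized RSRC_INSUFFICIENT_REQ_MEMORY error against the existing profile
	LC_ALL=fr out/minikube-linux-arm64 start -p functional-258660 --dry-run --memory 250MB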

TestFunctional/parallel/StatusCmd (1.15s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
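Note: the -f argument above is a Go template rendered against the status struct, so any subset of fields can be selected. A minimal sketch, using the .Host and .Kubelet fields that appear in the logged template:

	# Print only the host and kubelet states, each with a label
	out/minikube-linux-arm64 -p functional-258660 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'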

TestFunctional/parallel/ServiceCmdConnect (36.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-258660 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-258660 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-pzbht" [0eee149b-b558-46db-b133-b1f4240a98fc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-pzbht" [0eee149b-b558-46db-b133-b1f4240a98fc] Running
E1101 00:42:43.990747 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 36.015952127s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32218
functional_test.go:1674: http://192.168.49.2:32218: success! body:

Hostname: hello-node-connect-7799dfb7c6-pzbht

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32218
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (36.68s)
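Note: the steps above are the standard deploy/expose/probe sequence for a NodePort service. Condensed, with curl added as an illustrative manual check (the NodePort in the URL is assigned by Kubernetes and differs between runs):

	kubectl --context functional-258660 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-258660 expose deployment hello-node-connect --type=NodePort --port=8080
	curl "$(out/minikube-linux-arm64 -p functional-258660 service hello-node-connect --url)"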

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (1.71s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh -n functional-258660 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 cp functional-258660:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2873392724/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh -n functional-258660 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.71s)
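Note: minikube cp copies in both directions, host-to-node and node-to-host, which is exactly what the two cp invocations above exercise. A condensed sketch (the ./cp-test.txt destination is illustrative):

	out/minikube-linux-arm64 -p functional-258660 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-258660 cp functional-258660:/home/docker/cp-test.txt ./cp-test.txt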

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1202897/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo cat /etc/test/nested/copy/1202897/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)
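Note: the /etc/test/nested/copy/1202897/hosts path checked above is populated by minikube's file sync: files placed under ~/.minikube/files on the host are copied into the node at the matching absolute path on the next start. A sketch of staging such a file by hand (the 1202897 component simply mirrors this test's path):

	mkdir -p ~/.minikube/files/etc/test/nested/copy/1202897
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/1202897/hosts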

TestFunctional/parallel/CertSync (1.86s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1202897.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo cat /etc/ssl/certs/1202897.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1202897.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo cat /usr/share/ca-certificates/1202897.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/12028972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo cat /etc/ssl/certs/12028972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/12028972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo cat /usr/share/ca-certificates/12028972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.86s)
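Note: the 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash links, the form system trust stores use to look up CA certificates. The hash behind such a link can be checked by hand:

	# Prints the subject hash from which the .0 symlink name is derived
	openssl x509 -hash -noout -in /usr/share/ca-certificates/1202897.pem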

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-258660 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 ssh "sudo systemctl is-active docker": exit status 1 (442.245456ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 ssh "sudo systemctl is-active containerd": exit status 1 (394.520971ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.84s)
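Note: systemctl is-active exits with status 3 for an inactive unit, so the two non-zero exits above are the expected outcome on a crio cluster. The complementary check, as a sketch (should print active and exit 0):

	out/minikube-linux-arm64 -p functional-258660 ssh "sudo systemctl is-active crio"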

TestFunctional/parallel/License (0.38s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-258660 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-258660 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-258660 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-258660 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1226140: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-258660 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-258660 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c85fbda7-2953-4881-9347-d09636c26595] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c85fbda7-2953-4881-9347-d09636c26595] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.014046809s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.39s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-258660 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.89.163 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
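Note: the serial tunnel tests above follow the usual LoadBalancer workflow: keep a tunnel running, wait for the service to be assigned an ingress IP, then hit that IP directly. A condensed sketch (curl is an illustrative manual check; the tunnel process must stay alive and may prompt for sudo):

	out/minikube-linux-arm64 -p functional-258660 tunnel &
	kubectl --context functional-258660 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.96.89.163/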

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-258660 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-258660 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-258660 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-s5xsm" [79f17352-ca15-4373-873a-05f52eeeb47e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-s5xsm" [79f17352-ca15-4373-873a-05f52eeeb47e] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.011875651s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 service list -o json
functional_test.go:1493: Took "536.507999ms" to run "out/minikube-linux-arm64 -p functional-258660 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30104
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30104
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "382.395947ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "81.858875ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "337.533004ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "75.474036ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
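Note: the JSON output above is the form intended for scripting. Assuming the usual valid/invalid top-level grouping of profiles, names can be extracted with jq (illustrative):

	out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'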

TestFunctional/parallel/MountCmd/any-port (23.35s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdany-port3371680530/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698799380971778965" to /tmp/TestFunctionalparallelMountCmdany-port3371680530/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698799380971778965" to /tmp/TestFunctionalparallelMountCmdany-port3371680530/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698799380971778965" to /tmp/TestFunctionalparallelMountCmdany-port3371680530/001/test-1698799380971778965
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (398.747656ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 00:43 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 00:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 00:43 test-1698799380971778965
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh cat /mount-9p/test-1698799380971778965
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-258660 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [da1d4ee5-825a-429f-899f-7d2319979444] Pending
helpers_test.go:344: "busybox-mount" [da1d4ee5-825a-429f-899f-7d2319979444] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [da1d4ee5-825a-429f-899f-7d2319979444] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [da1d4ee5-825a-429f-899f-7d2319979444] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.014035909s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-258660 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdany-port3371680530/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (23.35s)
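Note: the any-port variant lets minikube choose the host-side 9p port; the verification loop above boils down to mounting, probing with findmnt, and listing the mount. A condensed sketch (/tmp/somedir is an illustrative host directory):

	out/minikube-linux-arm64 mount -p functional-258660 /tmp/somedir:/mount-9p &
	out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-258660 ssh -- ls -la /mount-9p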

TestFunctional/parallel/MountCmd/specific-port (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdspecific-port1675414702/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (380.208575ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdspecific-port1675414702/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 ssh "sudo umount -f /mount-9p": exit status 1 (303.666723ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-258660 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdspecific-port1675414702/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2317823125/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2317823125/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2317823125/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T" /mount1: exit status 1 (762.820693ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-258660 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2317823125/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2317823125/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-258660 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2317823125/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.9s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.90s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258660 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-258660
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258660 image ls --format short --alsologtostderr:
I1101 00:43:59.911603 1230267 out.go:296] Setting OutFile to fd 1 ...
I1101 00:43:59.911856 1230267 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:43:59.911867 1230267 out.go:309] Setting ErrFile to fd 2...
I1101 00:43:59.911874 1230267 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:43:59.912143 1230267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
I1101 00:43:59.912912 1230267 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:43:59.913096 1230267 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:43:59.913570 1230267 cli_runner.go:164] Run: docker container inspect functional-258660 --format={{.State.Status}}
I1101 00:43:59.931623 1230267 ssh_runner.go:195] Run: systemctl --version
I1101 00:43:59.931681 1230267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258660
I1101 00:43:59.950619 1230267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34302 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/functional-258660/id_rsa Username:docker}
I1101 00:44:00.049678 1230267 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
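Note: image ls accepts short, table, json, and yaml formats, all four of which are exercised in this group; the machine-readable formats pair naturally with jq, as in this illustrative sketch:

	out/minikube-linux-arm64 -p functional-258660 image ls --format json | jq -r '.[].repoTags[]'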

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258660 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | aae348c9fbd40 | 50.2MB |
| gcr.io/google-containers/addon-resizer  | functional-258660  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-proxy              | v1.28.3            | a5dd5cdd6d3ef | 69.9MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| localhost/my-image                      | functional-258660  | 622dd81c9e4cd | 1.64MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 537e9a59ee2fd | 121MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 8276439b4f237 | 117MB  |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 42a4e73724daa | 59.2MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258660 image ls --format table --alsologtostderr:
I1101 00:44:03.647606 1230575 out.go:296] Setting OutFile to fd 1 ...
I1101 00:44:03.647789 1230575 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:44:03.647813 1230575 out.go:309] Setting ErrFile to fd 2...
I1101 00:44:03.647833 1230575 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:44:03.648122 1230575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
I1101 00:44:03.648789 1230575 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:44:03.649010 1230575 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:44:03.649534 1230575 cli_runner.go:164] Run: docker container inspect functional-258660 --format={{.State.Status}}
I1101 00:44:03.669321 1230575 ssh_runner.go:195] Run: systemctl --version
I1101 00:44:03.669384 1230575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258660
I1101 00:44:03.687313 1230575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34302 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/functional-258660/id_rsa Username:docker}
I1101 00:44:03.782596 1230575 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258660 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"121054158"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50212152"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"622dd81c9e4cd79392d9190b042558d27971ab7ad599b0aa94bf94a79b5bd3bb","repoDigests":["localhost/my-image@sha256:ee7cf95a427bb0770b66e6ed21b7d5902723238d6d7c20d8d0bebea907832082"],"repoTags":["localhost/my-image:functional-258660"],"size":"1640226"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"117252916"},{"id":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"69926807"},{"id":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"59188020"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"fc31851867039d89a4d0a1d7467245b127a63973de5b5d8a59f66382234f41e9","repoDigests":["docker.io/library/7f65fc7c48f1f0a2329144e7e020a5a1f741b75b5317daf780ce472c740e9eac-tmp@sha256:4e19aa96c833cd3509a301a4f80bc71550b8c67fda248666b8827d905e90ccee"],"repoTags":[],"size":"1637644"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-258660"],"size":"34114467"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258660 image ls --format json --alsologtostderr:
I1101 00:44:03.398648 1230549 out.go:296] Setting OutFile to fd 1 ...
I1101 00:44:03.398832 1230549 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:44:03.398846 1230549 out.go:309] Setting ErrFile to fd 2...
I1101 00:44:03.398853 1230549 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:44:03.399127 1230549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
I1101 00:44:03.399811 1230549 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:44:03.399967 1230549 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:44:03.400545 1230549 cli_runner.go:164] Run: docker container inspect functional-258660 --format={{.State.Status}}
I1101 00:44:03.418229 1230549 ssh_runner.go:195] Run: systemctl --version
I1101 00:44:03.418330 1230549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258660
I1101 00:44:03.436216 1230549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34302 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/functional-258660/id_rsa Username:docker}
I1101 00:44:03.530705 1230549 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258660 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "117252916"
- id: 42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "59188020"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "121054158"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "69926807"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "50212152"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-258660
size: "34114467"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258660 image ls --format yaml --alsologtostderr:
I1101 00:44:00.190917 1230292 out.go:296] Setting OutFile to fd 1 ...
I1101 00:44:00.191161 1230292 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:44:00.191196 1230292 out.go:309] Setting ErrFile to fd 2...
I1101 00:44:00.191220 1230292 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:44:00.191513 1230292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
I1101 00:44:00.192288 1230292 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:44:00.192503 1230292 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:44:00.193177 1230292 cli_runner.go:164] Run: docker container inspect functional-258660 --format={{.State.Status}}
I1101 00:44:00.214241 1230292 ssh_runner.go:195] Run: systemctl --version
I1101 00:44:00.214316 1230292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258660
I1101 00:44:00.234154 1230292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34302 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/functional-258660/id_rsa Username:docker}
I1101 00:44:00.330859 1230292 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
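The YAML above is minikube's rendering of the node's CRI image store: the stderr trace shows the command SSHing into the container and running `sudo crictl images --output json`, then re-serializing the result. As a sketch (the jq filter is illustrative, not part of the test), the same data can be pulled straight from the runtime, or queried for untagged images, which appear above with an empty repoTags list:

	out/minikube-linux-arm64 -p functional-258660 ssh -- sudo crictl images --output json
	out/minikube-linux-arm64 -p functional-258660 image ls --format json | jq -r '.[] | select(.repoTags == []) | .id'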

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-258660 ssh pgrep buildkitd: exit status 1 (303.574129ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image build -t localhost/my-image:functional-258660 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 image build -t localhost/my-image:functional-258660 testdata/build --alsologtostderr: (2.362573013s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-258660 image build -t localhost/my-image:functional-258660 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fc318518670
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-258660
--> 622dd81c9e4
Successfully tagged localhost/my-image:functional-258660
622dd81c9e4cd79392d9190b042558d27971ab7ad599b0aa94bf94a79b5bd3bb
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-258660 image build -t localhost/my-image:functional-258660 testdata/build --alsologtostderr:
I1101 00:44:00.758228 1230368 out.go:296] Setting OutFile to fd 1 ...
I1101 00:44:00.759229 1230368 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:44:00.759246 1230368 out.go:309] Setting ErrFile to fd 2...
I1101 00:44:00.759253 1230368 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1101 00:44:00.759652 1230368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
I1101 00:44:00.760344 1230368 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:44:00.761138 1230368 config.go:182] Loaded profile config "functional-258660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1101 00:44:00.761796 1230368 cli_runner.go:164] Run: docker container inspect functional-258660 --format={{.State.Status}}
I1101 00:44:00.780537 1230368 ssh_runner.go:195] Run: systemctl --version
I1101 00:44:00.780596 1230368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-258660
I1101 00:44:00.798134 1230368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34302 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/functional-258660/id_rsa Username:docker}
I1101 00:44:00.894604 1230368 build_images.go:151] Building image from path: /tmp/build.1573253963.tar
I1101 00:44:00.894671 1230368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 00:44:00.905115 1230368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1573253963.tar
I1101 00:44:00.909442 1230368 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1573253963.tar: stat -c "%s %y" /var/lib/minikube/build/build.1573253963.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1573253963.tar': No such file or directory
I1101 00:44:00.909474 1230368 ssh_runner.go:362] scp /tmp/build.1573253963.tar --> /var/lib/minikube/build/build.1573253963.tar (3072 bytes)
I1101 00:44:00.937970 1230368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1573253963
I1101 00:44:00.948931 1230368 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1573253963 -xf /var/lib/minikube/build/build.1573253963.tar
I1101 00:44:00.959829 1230368 crio.go:297] Building image: /var/lib/minikube/build/build.1573253963
I1101 00:44:00.959917 1230368 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-258660 /var/lib/minikube/build/build.1573253963 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1101 00:44:03.026673 1230368 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-258660 /var/lib/minikube/build/build.1573253963 --cgroup-manager=cgroupfs: (2.066713921s)
I1101 00:44:03.026738 1230368 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1573253963
I1101 00:44:03.037663 1230368 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1573253963.tar
I1101 00:44:03.048805 1230368 build_images.go:207] Built localhost/my-image:functional-258660 from /tmp/build.1573253963.tar
I1101 00:44:03.048835 1230368 build_images.go:123] succeeded building to: functional-258660
I1101 00:44:03.048841 1230368 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.94s)
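The STEP lines pin down the Dockerfile under testdata/build: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. On the crio runtime the build is delegated to podman inside the node (`sudo podman build ... --cgroup-manager=cgroupfs`, per the stderr trace). A minimal context that reproduces the same three steps (file contents are an assumption; the test only needs the build to succeed):

	mkdir -p build && cd build
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	echo hello > content.txt
	out/minikube-linux-arm64 -p functional-258660 image build -t localhost/my-image:functional-258660 .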

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.745686784s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-258660
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image load --daemon gcr.io/google-containers/addon-resizer:functional-258660 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 image load --daemon gcr.io/google-containers/addon-resizer:functional-258660 --alsologtostderr: (4.0506516s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image load --daemon gcr.io/google-containers/addon-resizer:functional-258660 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 image load --daemon gcr.io/google-containers/addon-resizer:functional-258660 --alsologtostderr: (2.663459276s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.749338195s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-258660
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image load --daemon gcr.io/google-containers/addon-resizer:functional-258660 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 image load --daemon gcr.io/google-containers/addon-resizer:functional-258660 --alsologtostderr: (3.48028079s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image save gcr.io/google-containers/addon-resizer:functional-258660 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image rm gcr.io/google-containers/addon-resizer:functional-258660 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-258660 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.030074535s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-258660
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 image save --daemon gcr.io/google-containers/addon-resizer:functional-258660 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-258660
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)
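Taken together, the Setup/Load/Save/Remove blocks above exercise every transfer direction between the host's Docker daemon, a tarball, and the cluster's crio image store. The whole round trip condenses to the following sequence (same commands as the tests, with the save path shortened):

	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-258660
	out/minikube-linux-arm64 -p functional-258660 image load --daemon gcr.io/google-containers/addon-resizer:functional-258660   # daemon -> cluster
	out/minikube-linux-arm64 -p functional-258660 image save gcr.io/google-containers/addon-resizer:functional-258660 ./addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-258660 image rm gcr.io/google-containers/addon-resizer:functional-258660
	out/minikube-linux-arm64 -p functional-258660 image load ./addon-resizer-save.tar                                            # tar -> cluster
	out/minikube-linux-arm64 -p functional-258660 image save --daemon gcr.io/google-containers/addon-resizer:functional-258660   # cluster -> daemon
	docker image inspect gcr.io/google-containers/addon-resizer:functional-258660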

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 update-context --alsologtostderr -v=2
E1101 00:45:00.144587 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-258660 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
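All three UpdateContextCmd variants run the same command; update-context rewrites the profile's kubeconfig entry so the API server address matches the container's current IP and port. A quick way to see the effect (the jsonpath query is an assumption, not part of the test):

	out/minikube-linux-arm64 -p functional-258660 update-context
	kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-258660")].cluster.server}'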

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-258660
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-258660
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-258660
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (97.5s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-992876 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1101 00:45:27.831682 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-992876 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m37.497290643s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (97.50s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-992876 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.67s)

                                                
                                    
TestJSONOutput/start/Command (75.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-769833 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1101 00:55:00.144425 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-769833 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m15.407418014s)
--- PASS: TestJSONOutput/start/Command (75.41s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.85s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-769833 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-769833 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.99s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-769833 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-769833 --output=json --user=testUser: (5.988069295s)
--- PASS: TestJSONOutput/stop/Command (5.99s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-412630 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-412630 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (102.288416ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d30dcacf-40b8-46c5-a5c6-5296fa534f77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-412630] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb7ee750-e118-4035-8b07-c70474d766e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17486"}}
	{"specversion":"1.0","id":"9bd60646-33e8-43bb-986f-7cd3c2fec2b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"94e7ccbb-dcc3-4ff8-b306-2398747d752d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig"}}
	{"specversion":"1.0","id":"bbae5550-5265-4239-90a4-a765dd580721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube"}}
	{"specversion":"1.0","id":"18f2dd35-01dc-434d-aadf-7f17122363e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2246f75e-5faf-4a1f-9a18-18e1616c171b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"333e3ced-b0e8-477d-ae83-da35987afe1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-412630" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-412630
--- PASS: TestErrorJSONOutput (0.26s)
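Every stdout line above is a CloudEvents-style JSON object: specversion, id, source, a type such as io.k8s.sigs.minikube.step/info/error, and a data payload; the error event carries name, message, and exitcode (56 here, matching the process exit status). A sketch for consuming the stream, assuming jq on the host and a hypothetical profile name:

	out/minikube-linux-arm64 start -p demo --output=json --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'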

                                                
                                    
TestKicCustomNetwork/create_custom_network (42.7s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-598354 --network=
E1101 00:56:23.191841 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-598354 --network=: (40.516907084s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-598354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-598354
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-598354: (2.157708211s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.70s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.73s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-153956 --network=bridge
E1101 00:57:02.259679 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-153956 --network=bridge: (32.652902979s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-153956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-153956
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-153956: (2.046814982s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.73s)

                                                
                                    
TestKicExistingNetwork (34.5s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-204274 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-204274 --network=existing-network: (32.288464983s)
helpers_test.go:175: Cleaning up "existing-network-204274" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-204274
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-204274: (2.050012534s)
--- PASS: TestKicExistingNetwork (34.50s)
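The difference from the create_custom_network cases above is that here the Docker network exists before minikube starts; the test presumably pre-creates it in Go (that step is not among the logged commands), and minikube attaches to it instead of creating its own. Reproduced by hand (sketch):

	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-network-204274 --network=existing-network
	docker network ls --format {{.Name}}   # existing-network is reused rather than recreated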

                                                
                                    
TestKicCustomSubnet (36.68s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-752674 --subnet=192.168.60.0/24
E1101 00:57:55.882354 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:55.889094 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:55.899326 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:55.919578 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:55.960218 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:56.042336 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:56.202680 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:56.523087 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:57.163911 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:57:58.444728 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:58:01.005094 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:58:06.126180 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 00:58:16.366364 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-752674 --subnet=192.168.60.0/24: (34.533961269s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-752674 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-752674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-752674
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-752674: (2.119867892s)
--- PASS: TestKicCustomSubnet (36.68s)

                                                
                                    
TestKicStaticIP (34.29s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-083358 --static-ip=192.168.200.200
E1101 00:58:36.846601 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-083358 --static-ip=192.168.200.200: (32.00745454s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-083358 ip
helpers_test.go:175: Cleaning up "static-ip-083358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-083358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-083358: (2.097254411s)
--- PASS: TestKicStaticIP (34.29s)
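TestKicCustomSubnet and TestKicStaticIP above map minikube flags directly onto properties of the Docker network and container: --subnet fixes the network's IPAM range, --static-ip pins the node's address. Both are verified with the commands already shown; condensed, with expected outputs as comments:

	out/minikube-linux-arm64 start -p custom-subnet-752674 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-752674 --format "{{(index .IPAM.Config 0).Subnet}}"   # 192.168.60.0/24
	out/minikube-linux-arm64 start -p static-ip-083358 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-083358 ip                                            # 192.168.200.200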

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (75.12s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-760777 --driver=docker  --container-runtime=crio
E1101 00:59:17.806850 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-760777 --driver=docker  --container-runtime=crio: (34.592740721s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-763559 --driver=docker  --container-runtime=crio
E1101 01:00:00.148588 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-763559 --driver=docker  --container-runtime=crio: (35.159063065s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-760777
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-763559
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-763559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-763559
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-763559: (1.994805089s)
helpers_test.go:175: Cleaning up "first-760777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-760777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-760777: (2.035177672s)
--- PASS: TestMinikubeProfile (75.12s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-219914 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-219914 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.741537426s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.74s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-219914 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-221818 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-221818 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.822201667s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.82s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221818 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-219914 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-219914 --alsologtostderr -v=5: (1.685840893s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221818 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-221818
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-221818: (1.25865272s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.09s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-221818
E1101 01:00:39.727665 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-221818: (7.087036395s)
--- PASS: TestMountStart/serial/RestartStopped (8.09s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221818 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)
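The MountStart series drives the 9p host mount end to end: start with --mount plus explicit uid/gid/msize/port settings, verify the default /minikube-host target over ssh, and confirm the mount is still served after the second profile is stopped and restarted. The core start-and-verify pair, with a hypothetical profile name:

	out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount \
	  --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
	  --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host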

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (121.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-291182 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 01:02:02.260117 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-291182 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m0.568753934s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (121.13s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-291182 -- rollout status deployment/busybox: (3.147244341s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-2p499 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-7m7pb -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-2p499 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-7m7pb -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-2p499 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec busybox-5bc68d56bd-7m7pb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.43s)
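DeployApp2Nodes applies a two-replica busybox deployment (multinode-pod-dns-test.yaml) and then checks, from each pod, that an external name (kubernetes.io) and the in-cluster names (kubernetes.default, kubernetes.default.svc.cluster.local) all resolve, i.e. that CoreDNS is reachable from pods on both nodes. Per pod the check reduces to (pod names vary per run):

	out/minikube-linux-arm64 kubectl -p multinode-291182 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-arm64 kubectl -p multinode-291182 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local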

                                                
                                    
TestMultiNode/serial/AddNode (49.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-291182 -v 3 --alsologtostderr
E1101 01:03:23.567843 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 01:03:25.303076 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-291182 -v 3 --alsologtostderr: (48.647655376s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.36s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp testdata/cp-test.txt multinode-291182:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile165111203/001/cp-test_multinode-291182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182:/home/docker/cp-test.txt multinode-291182-m02:/home/docker/cp-test_multinode-291182_multinode-291182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m02 "sudo cat /home/docker/cp-test_multinode-291182_multinode-291182-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182:/home/docker/cp-test.txt multinode-291182-m03:/home/docker/cp-test_multinode-291182_multinode-291182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m03 "sudo cat /home/docker/cp-test_multinode-291182_multinode-291182-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp testdata/cp-test.txt multinode-291182-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile165111203/001/cp-test_multinode-291182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182-m02:/home/docker/cp-test.txt multinode-291182:/home/docker/cp-test_multinode-291182-m02_multinode-291182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182 "sudo cat /home/docker/cp-test_multinode-291182-m02_multinode-291182.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182-m02:/home/docker/cp-test.txt multinode-291182-m03:/home/docker/cp-test_multinode-291182-m02_multinode-291182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m03 "sudo cat /home/docker/cp-test_multinode-291182-m02_multinode-291182-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp testdata/cp-test.txt multinode-291182-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile165111203/001/cp-test_multinode-291182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182-m03:/home/docker/cp-test.txt multinode-291182:/home/docker/cp-test_multinode-291182-m03_multinode-291182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182 "sudo cat /home/docker/cp-test_multinode-291182-m03_multinode-291182.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182-m03:/home/docker/cp-test.txt multinode-291182-m02:/home/docker/cp-test_multinode-291182-m03_multinode-291182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182-m02 "sudo cat /home/docker/cp-test_multinode-291182-m03_multinode-291182-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.34s)
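
Note: the CopyFile steps above exercise every direction `minikube cp` supports: local to node, node to local, and node to node, each copy verified by reading the file back over ssh. A condensed sketch of one such round-trip using this run's profile (the destination file name below is illustrative):

    # local -> node, then verify the contents over ssh
    out/minikube-linux-arm64 -p multinode-291182 cp testdata/cp-test.txt multinode-291182:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-291182 ssh -n multinode-291182 "sudo cat /home/docker/cp-test.txt"
    # node -> node, verified the same way on the receiving node
    out/minikube-linux-arm64 -p multinode-291182 cp multinode-291182:/home/docker/cp-test.txt multinode-291182-m02:/home/docker/cp-test-copy.txt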

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-291182 node stop m03: (1.261846399s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-291182 status: exit status 7 (562.965978ms)

-- stdout --
	multinode-291182
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-291182-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-291182-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-291182 status --alsologtostderr: exit status 7 (563.45253ms)

-- stdout --
	multinode-291182
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-291182-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-291182-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 01:04:01.245474 1276684 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:04:01.245599 1276684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:04:01.245609 1276684 out.go:309] Setting ErrFile to fd 2...
	I1101 01:04:01.245616 1276684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:04:01.245877 1276684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 01:04:01.246057 1276684 out.go:303] Setting JSON to false
	I1101 01:04:01.246106 1276684 mustload.go:65] Loading cluster: multinode-291182
	I1101 01:04:01.246189 1276684 notify.go:220] Checking for updates...
	I1101 01:04:01.246538 1276684 config.go:182] Loaded profile config "multinode-291182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:04:01.246553 1276684 status.go:255] checking status of multinode-291182 ...
	I1101 01:04:01.247057 1276684 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Status}}
	I1101 01:04:01.266939 1276684 status.go:330] multinode-291182 host status = "Running" (err=<nil>)
	I1101 01:04:01.266964 1276684 host.go:66] Checking if "multinode-291182" exists ...
	I1101 01:04:01.267383 1276684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-291182
	I1101 01:04:01.291554 1276684 host.go:66] Checking if "multinode-291182" exists ...
	I1101 01:04:01.291846 1276684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 01:04:01.291892 1276684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182
	I1101 01:04:01.316639 1276684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34367 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182/id_rsa Username:docker}
	I1101 01:04:01.415681 1276684 ssh_runner.go:195] Run: systemctl --version
	I1101 01:04:01.421173 1276684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:04:01.434846 1276684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:04:01.508298 1276684 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-01 01:04:01.498538502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:04:01.508951 1276684 kubeconfig.go:92] found "multinode-291182" server: "https://192.168.58.2:8443"
	I1101 01:04:01.509029 1276684 api_server.go:166] Checking apiserver status ...
	I1101 01:04:01.509093 1276684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 01:04:01.522484 1276684 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	I1101 01:04:01.533860 1276684 api_server.go:182] apiserver freezer: "10:freezer:/docker/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/crio/crio-6179ef1243c86f08394db15efe63b5ef66d3bf8a51e0edc60ff26943df68739d"
	I1101 01:04:01.533933 1276684 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/065d29e000af942be75697e274ce4d3d1ae2d6a4ea343e2286dbc55c3a59ee59/crio/crio-6179ef1243c86f08394db15efe63b5ef66d3bf8a51e0edc60ff26943df68739d/freezer.state
	I1101 01:04:01.544212 1276684 api_server.go:204] freezer state: "THAWED"
	I1101 01:04:01.544244 1276684 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1101 01:04:01.552973 1276684 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1101 01:04:01.553166 1276684 status.go:421] multinode-291182 apiserver status = Running (err=<nil>)
	I1101 01:04:01.553178 1276684 status.go:257] multinode-291182 status: &{Name:multinode-291182 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 01:04:01.553195 1276684 status.go:255] checking status of multinode-291182-m02 ...
	I1101 01:04:01.553508 1276684 cli_runner.go:164] Run: docker container inspect multinode-291182-m02 --format={{.State.Status}}
	I1101 01:04:01.571300 1276684 status.go:330] multinode-291182-m02 host status = "Running" (err=<nil>)
	I1101 01:04:01.571326 1276684 host.go:66] Checking if "multinode-291182-m02" exists ...
	I1101 01:04:01.571631 1276684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-291182-m02
	I1101 01:04:01.589271 1276684 host.go:66] Checking if "multinode-291182-m02" exists ...
	I1101 01:04:01.589589 1276684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 01:04:01.589639 1276684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-291182-m02
	I1101 01:04:01.607289 1276684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34372 SSHKeyPath:/home/jenkins/minikube-integration/17486-1197516/.minikube/machines/multinode-291182-m02/id_rsa Username:docker}
	I1101 01:04:01.703442 1276684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 01:04:01.716999 1276684 status.go:257] multinode-291182-m02 status: &{Name:multinode-291182-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 01:04:01.717035 1276684 status.go:255] checking status of multinode-291182-m03 ...
	I1101 01:04:01.717345 1276684 cli_runner.go:164] Run: docker container inspect multinode-291182-m03 --format={{.State.Status}}
	I1101 01:04:01.736800 1276684 status.go:330] multinode-291182-m03 host status = "Stopped" (err=<nil>)
	I1101 01:04:01.736825 1276684 status.go:343] host is not running, skipping remaining checks
	I1101 01:04:01.736834 1276684 status.go:257] multinode-291182-m03 status: &{Name:multinode-291182-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
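
Note: exit status 7 above is the assertion, not a failure: `minikube status` returns a non-zero code whenever any node in the profile is not fully running, which is exactly the state after stopping m03. A minimal sketch of branching on that code in a script, using this run's profile:

    out/minikube-linux-arm64 -p multinode-291182 status
    if [ $? -eq 7 ]; then
      # a host and/or kubelet is stopped, as expected after 'node stop m03'
      echo "cluster degraded as expected"
    fi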

TestMultiNode/serial/StartAfterStop (12.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-291182 node start m03 --alsologtostderr: (11.67570034s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.51s)

TestMultiNode/serial/RestartKeepsNodes (119.79s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-291182
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-291182
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-291182: (25.082407576s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-291182 --wait=true -v=8 --alsologtostderr
E1101 01:05:00.145068 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-291182 --wait=true -v=8 --alsologtostderr: (1m34.536010108s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-291182
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.79s)

TestMultiNode/serial/DeleteNode (5.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-291182 node delete m03: (4.369660511s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.11s)

TestMultiNode/serial/StopMultiNode (24.1s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-291182 stop: (23.881538512s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-291182 status: exit status 7 (106.817471ms)

-- stdout --
	multinode-291182
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-291182-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-291182 status --alsologtostderr: exit status 7 (110.867161ms)

-- stdout --
	multinode-291182
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-291182-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 01:06:43.218675 1284984 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:06:43.218892 1284984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:06:43.218903 1284984 out.go:309] Setting ErrFile to fd 2...
	I1101 01:06:43.218910 1284984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:06:43.219214 1284984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 01:06:43.219429 1284984 out.go:303] Setting JSON to false
	I1101 01:06:43.219520 1284984 mustload.go:65] Loading cluster: multinode-291182
	I1101 01:06:43.219573 1284984 notify.go:220] Checking for updates...
	I1101 01:06:43.221110 1284984 config.go:182] Loaded profile config "multinode-291182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:06:43.221130 1284984 status.go:255] checking status of multinode-291182 ...
	I1101 01:06:43.221729 1284984 cli_runner.go:164] Run: docker container inspect multinode-291182 --format={{.State.Status}}
	I1101 01:06:43.240464 1284984 status.go:330] multinode-291182 host status = "Stopped" (err=<nil>)
	I1101 01:06:43.240497 1284984 status.go:343] host is not running, skipping remaining checks
	I1101 01:06:43.240504 1284984 status.go:257] multinode-291182 status: &{Name:multinode-291182 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 01:06:43.240534 1284984 status.go:255] checking status of multinode-291182-m02 ...
	I1101 01:06:43.240828 1284984 cli_runner.go:164] Run: docker container inspect multinode-291182-m02 --format={{.State.Status}}
	I1101 01:06:43.258291 1284984 status.go:330] multinode-291182-m02 host status = "Stopped" (err=<nil>)
	I1101 01:06:43.258314 1284984 status.go:343] host is not running, skipping remaining checks
	I1101 01:06:43.258322 1284984 status.go:257] multinode-291182-m02 status: &{Name:multinode-291182-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

TestMultiNode/serial/RestartMultiNode (78.55s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-291182 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1101 01:07:02.259659 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 01:07:55.881346 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-291182 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.770170799s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-291182 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.55s)

TestMultiNode/serial/ValidateNameConflict (33.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-291182
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-291182-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-291182-m02 --driver=docker  --container-runtime=crio: exit status 14 (101.945329ms)

-- stdout --
	* [multinode-291182-m02] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-291182-m02' is duplicated with machine name 'multinode-291182-m02' in profile 'multinode-291182'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-291182-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-291182-m03 --driver=docker  --container-runtime=crio: (30.670655911s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-291182
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-291182: exit status 80 (364.790245ms)

-- stdout --
	* Adding node m03 to cluster multinode-291182
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-291182-m03 already exists in multinode-291182-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-291182-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-291182-m03: (2.02304501s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.24s)
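
Note: this test pins down two separate guards: `start -p` refuses a profile name that collides with an existing machine name (exit status 14, MK_USAGE), and `node add` refuses a node whose generated name already belongs to another profile (exit status 80, GUEST_NODE_ADD). The colliding invocation from the run above:

    # 'multinode-291182-m02' is already the machine name of the second node in
    # profile 'multinode-291182', so this exits with status 14 (MK_USAGE)
    out/minikube-linux-arm64 start -p multinode-291182-m02 --driver=docker --container-runtime=crio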

TestPreload (178.62s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-326605 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1101 01:10:00.144117 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-326605 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m31.900399932s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-326605 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-326605 image pull gcr.io/k8s-minikube/busybox: (1.904044512s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-326605
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-326605: (5.827604275s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-326605 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-326605 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m16.257706245s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-326605 image list
helpers_test.go:175: Cleaning up "test-preload-326605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-326605
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-326605: (2.472021012s)
--- PASS: TestPreload (178.62s)

TestScheduledStopUnix (105.63s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-562817 --memory=2048 --driver=docker  --container-runtime=crio
E1101 01:12:02.260164 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-562817 --memory=2048 --driver=docker  --container-runtime=crio: (28.547849049s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-562817 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-562817 -n scheduled-stop-562817
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-562817 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-562817 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-562817 -n scheduled-stop-562817
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-562817
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-562817 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1101 01:12:55.882469 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 01:13:03.192523 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-562817
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-562817: exit status 7 (91.53639ms)

-- stdout --
	scheduled-stop-562817
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-562817 -n scheduled-stop-562817
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-562817 -n scheduled-stop-562817: exit status 7 (86.695912ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-562817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-562817
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-562817: (5.215426392s)
--- PASS: TestScheduledStopUnix (105.63s)
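
Note: the scheduled-stop flow exercised above: `--schedule <duration>` arms a deferred stop, a second `--schedule` replaces the pending one, and `--cancel-scheduled` disarms it; once a schedule is allowed to fire, `minikube status` reports Stopped with exit status 7. A minimal sketch of the same sequence (durations are illustrative):

    # arm a stop five minutes out, then tighten it to 15 seconds
    out/minikube-linux-arm64 stop -p scheduled-stop-562817 --schedule 5m
    out/minikube-linux-arm64 stop -p scheduled-stop-562817 --schedule 15s
    # disarm the pending stop before it fires
    out/minikube-linux-arm64 stop -p scheduled-stop-562817 --cancel-scheduled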

TestInsufficientStorage (13.68s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-298515 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-298515 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.09962686s)

-- stdout --
	{"specversion":"1.0","id":"ba47e96f-c8c8-492e-8863-f7451bee4488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-298515] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb0be73b-8c2a-4b05-b1a6-be19d6580a1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17486"}}
	{"specversion":"1.0","id":"43726b8e-1a37-4594-b742-7bd4c4ea5a16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"04fe2918-f96a-4015-a4a8-5aae8a424249","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig"}}
	{"specversion":"1.0","id":"4f24cb16-b381-484a-8577-d9a086b8707a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube"}}
	{"specversion":"1.0","id":"20b070d2-7fca-460a-9ac3-ba8c392c4793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f7bd3825-f9e0-4196-8f4a-556573bef922","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7b6f67c2-45b2-4857-8e44-6ecd4db7b9b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"837ae47d-4cdb-4292-9ac4-2b8ef4a54645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f3d8df7a-d206-43b8-828d-cedf59eb2343","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e64b234a-5b61-4a7f-87f8-d20be9b8312f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9cac7135-32c2-4ec9-b8c4-d74756d933d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-298515 in cluster insufficient-storage-298515","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"66b22635-5b8c-4b14-9ade-671ea5f767b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"53eceb7f-4ce4-4e84-b95a-8bdb4013aa3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3983e8e5-9d08-4464-a714-04e2fa4e5a64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-298515 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-298515 --output=json --layout=cluster: exit status 7 (324.653277ms)

-- stdout --
	{"Name":"insufficient-storage-298515","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-298515","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1101 01:13:37.093457 1301601 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-298515" does not appear in /home/jenkins/minikube-integration/17486-1197516/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-298515 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-298515 --output=json --layout=cluster: exit status 7 (317.339289ms)

-- stdout --
	{"Name":"insufficient-storage-298515","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-298515","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1101 01:13:37.412578 1301655 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-298515" does not appear in /home/jenkins/minikube-integration/17486-1197516/kubeconfig
	E1101 01:13:37.424579 1301655 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/insufficient-storage-298515/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-298515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-298515
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-298515: (1.936720241s)
--- PASS: TestInsufficientStorage (13.68s)
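
Note: with `--output=json`, every line minikube prints is a CloudEvents-style JSON object, so the RSRC_DOCKER_STORAGE error above can be extracted mechanically. A sketch assuming `jq` is installed (the MINIKUBE_TEST_STORAGE_CAPACITY/MINIKUBE_TEST_AVAILABLE_STORAGE settings visible in the stream are how the test simulates a full disk):

    # surface only error events from the JSON event stream
    out/minikube-linux-arm64 start -p insufficient-storage-298515 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'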

TestKubernetesUpgrade (391.11s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 01:15:00.144450 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m10.828015749s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-886011
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-886011: (2.211722581s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-886011 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-886011 status --format={{.Host}}: exit status 7 (88.379054ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m45.412795891s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-886011 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (124.188159ms)

-- stdout --
	* [kubernetes-upgrade-886011] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-886011
	    minikube start -p kubernetes-upgrade-886011 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8860112 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-886011 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.883391069s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-886011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-886011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-886011: (2.404639698s)
--- PASS: TestKubernetesUpgrade (391.11s)
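
Note: the upgrade path validated here is stop-and-restart with a newer `--kubernetes-version` (an in-place upgrade of the same profile), after which a downgrade request is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). Condensed, the supported sequence from this run is:

    # in-place upgrade: same profile, newer version after a stop
    out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-886011
    out/minikube-linux-arm64 start -p kubernetes-upgrade-886011 --memory=2200 --kubernetes-version=v1.28.3 --driver=docker --container-runtime=crio
    # asking for v1.16.0 again at this point exits 106; recreate the profile instead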

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-796476 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-796476 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (95.718557ms)

-- stdout --
	* [NoKubernetes-796476] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
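
Note: exit status 14 here confirms that `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. To run a profile without Kubernetes, the version flag is simply dropped, as the StartWithStopK8s step below does:

    # valid: no Kubernetes components are started, so there is no version to pin
    out/minikube-linux-arm64 start -p NoKubernetes-796476 --no-kubernetes --driver=docker --container-runtime=crio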

TestNoKubernetes/serial/StartWithK8s (42.71s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-796476 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-796476 --driver=docker  --container-runtime=crio: (42.169256418s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-796476 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.71s)

TestNoKubernetes/serial/StartWithStopK8s (12.8s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-796476 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-796476 --no-kubernetes --driver=docker  --container-runtime=crio: (9.624245561s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-796476 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-796476 status -o json: exit status 2 (370.308327ms)

-- stdout --
	{"Name":"NoKubernetes-796476","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-796476
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-796476: (2.80408357s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.80s)

TestNoKubernetes/serial/Start (9.63s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-796476 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-796476 --no-kubernetes --driver=docker  --container-runtime=crio: (9.630656572s)
--- PASS: TestNoKubernetes/serial/Start (9.63s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-796476 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-796476 "sudo systemctl is-active --quiet service kubelet": exit status 1 (303.11373ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
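
Note: the 'Process exited with status 3' in stderr is the assertion succeeding: `systemctl is-active --quiet` exits 0 only when the unit is active, and conventionally 3 when it is inactive, so the non-zero exit proves the kubelet is not running. A sketch of the same check in script form (the exit-code convention is systemd's, not stated in this log):

    if ! out/minikube-linux-arm64 ssh -p NoKubernetes-796476 "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet inactive, as expected for a --no-kubernetes profile"
    fi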

TestNoKubernetes/serial/ProfileList (1.07s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.07s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-796476
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-796476: (1.294171014s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (7.22s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-796476 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-796476 --driver=docker  --container-runtime=crio: (7.218994278s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.22s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.53s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-796476 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-796476 "sudo systemctl is-active --quiet service kubelet": exit status 1 (532.746423ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.53s)

TestStoppedBinaryUpgrade/Setup (1.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-506779
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

TestPause/serial/Start (80.58s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-447547 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1101 01:20:00.144199 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
E1101 01:20:05.303371 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-447547 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.574996131s)
--- PASS: TestPause/serial/Start (80.58s)

TestPause/serial/SecondStartNoReconfiguration (42.79s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-447547 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-447547 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.762292334s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.79s)

TestPause/serial/Pause (0.84s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-447547 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-447547 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-447547 --output=json --layout=cluster: exit status 2 (365.918547ms)

                                                
                                                
-- stdout --
	{"Name":"pause-447547","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-447547","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
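The cluster-layout JSON above encodes component state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A quick sketch for pulling one field out of it, assuming jq is available; note that status itself exits 2 while the cluster is paused, so the stdout, not the exit code, carries the answer:

	out/minikube-linux-arm64 status -p pause-447547 --output=json --layout=cluster | jq -r '.Nodes[].Components.kubelet.StatusName'
	# prints "Stopped" for the paused cluster shown above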

TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-447547 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

TestPause/serial/PauseAgain (0.99s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-447547 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

TestPause/serial/DeletePaused (2.85s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-447547 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-447547 --alsologtostderr -v=5: (2.847360293s)
--- PASS: TestPause/serial/DeletePaused (2.85s)

TestPause/serial/VerifyDeletedResources (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-447547
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-447547: exit status 1 (19.509214ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-447547: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)
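The same leak check can be run by hand with the docker commands the test uses; minikube's docker driver names the volume and network after the profile. A sketch, with grep -x as an assumption for exact-name matching:

	docker volume inspect pause-447547 >/dev/null 2>&1 && echo "volume leaked"
	docker network ls --format '{{.Name}}' | grep -x pause-447547 && echo "network leaked"

Neither line should print anything after a clean delete, as above.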

TestNetworkPlugins/group/false (5.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-450738 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-450738 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (307.268591ms)

                                                
                                                
-- stdout --
	* [false-450738] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17486
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 01:21:33.133443 1340386 out.go:296] Setting OutFile to fd 1 ...
	I1101 01:21:33.133685 1340386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:21:33.133713 1340386 out.go:309] Setting ErrFile to fd 2...
	I1101 01:21:33.133739 1340386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1101 01:21:33.134053 1340386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17486-1197516/.minikube/bin
	I1101 01:21:33.134502 1340386 out.go:303] Setting JSON to false
	I1101 01:21:33.135645 1340386 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32641,"bootTime":1698769053,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1101 01:21:33.135752 1340386 start.go:138] virtualization:  
	I1101 01:21:33.139079 1340386 out.go:177] * [false-450738] minikube v1.32.0-beta.0 on Ubuntu 20.04 (arm64)
	I1101 01:21:33.142065 1340386 out.go:177]   - MINIKUBE_LOCATION=17486
	I1101 01:21:33.142136 1340386 notify.go:220] Checking for updates...
	I1101 01:21:33.145999 1340386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 01:21:33.148089 1340386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17486-1197516/kubeconfig
	I1101 01:21:33.150214 1340386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17486-1197516/.minikube
	I1101 01:21:33.152089 1340386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1101 01:21:33.154091 1340386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 01:21:33.156786 1340386 config.go:182] Loaded profile config "force-systemd-flag-798558": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 01:21:33.156935 1340386 driver.go:378] Setting default libvirt URI to qemu:///system
	I1101 01:21:33.187296 1340386 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1101 01:21:33.187416 1340386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 01:21:33.334167 1340386 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-01 01:21:33.319507977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1101 01:21:33.334273 1340386 docker.go:295] overlay module found
	I1101 01:21:33.336762 1340386 out.go:177] * Using the docker driver based on user configuration
	I1101 01:21:33.338726 1340386 start.go:298] selected driver: docker
	I1101 01:21:33.338740 1340386 start.go:902] validating driver "docker" against <nil>
	I1101 01:21:33.338753 1340386 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 01:21:33.341294 1340386 out.go:177] 
	W1101 01:21:33.343376 1340386 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 01:21:33.345273 1340386 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-450738 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-450738" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-450738

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-450738"

                                                
                                                
----------------------- debugLogs end: false-450738 [took: 4.815047909s] --------------------------------
helpers_test.go:175: Cleaning up "false-450738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-450738
--- PASS: TestNetworkPlugins/group/false (5.45s)
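This group passes because the failure is the point: minikube rejects --cni=false with the crio runtime (the MK_USAGE error above), since crio ships no built-in pod networking. A hedged sketch of a start line that would satisfy that check, with bridge as one arbitrary valid --cni choice:

	out/minikube-linux-arm64 start -p false-450738 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio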

TestStartStop/group/old-k8s-version/serial/FirstStart (140.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-461409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1101 01:25:00.144848 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-461409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m20.072292558s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.07s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-461409 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f7441e60-c9ba-4468-a239-a9f5229c3104] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f7441e60-c9ba-4468-a239-a9f5229c3104] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.040510892s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-461409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-461409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-461409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057857055s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-461409 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/old-k8s-version/serial/Stop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-461409 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-461409 --alsologtostderr -v=3: (12.380264118s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-461409 -n old-k8s-version-461409
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-461409 -n old-k8s-version-461409: exit status 7 (117.263532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-461409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)
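The non-zero status exit (7 here) is expected for a stopped profile, which is why the test notes "may be ok" and then enables the addon anyway: addons can be configured while the cluster is down. A minimal sketch of that sequence, assuming the same profile:

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-461409 || echo "host stopped, continuing"
	out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-461409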

TestStartStop/group/old-k8s-version/serial/SecondStart (442.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-461409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-461409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m21.715851074s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-461409 -n old-k8s-version-461409
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (442.13s)

TestStartStop/group/no-preload/serial/FirstStart (67.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-943728 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 01:27:02.259278 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-943728 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m7.500344524s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.50s)

TestStartStop/group/no-preload/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-943728 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dfd7897b-6f4a-443c-a370-9515dac9a08b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dfd7897b-6f4a-443c-a370-9515dac9a08b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.037656264s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-943728 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.47s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-943728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-943728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069439321s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-943728 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/no-preload/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-943728 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-943728 --alsologtostderr -v=3: (12.098645274s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-943728 -n no-preload-943728
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-943728 -n no-preload-943728: exit status 7 (94.038233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-943728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (349.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-943728 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 01:27:55.881788 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 01:29:43.192771 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
E1101 01:30:00.144684 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
E1101 01:30:58.928442 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 01:32:02.260215 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 01:32:55.881944 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-943728 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m48.485428411s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-943728 -n no-preload-943728
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (349.04s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-x5r2q" [00c0929c-4a28-4f49-9671-589c71729d5a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022429517s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-x5r2q" [00c0929c-4a28-4f49-9671-589c71729d5a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011503161s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-461409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-461409 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.53s)

TestStartStop/group/old-k8s-version/serial/Pause (5.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-461409 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-461409 --alsologtostderr -v=1: (1.328902319s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-461409 -n old-k8s-version-461409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-461409 -n old-k8s-version-461409: exit status 2 (568.374508ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-461409 -n old-k8s-version-461409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-461409 -n old-k8s-version-461409: exit status 2 (572.586023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-461409 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-461409 --alsologtostderr -v=1: (1.255208736s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-461409 -n old-k8s-version-461409
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-461409 -n old-k8s-version-461409: (1.006175503s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-461409 -n old-k8s-version-461409
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.30s)
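The pause/unpause round trip above is verified purely through templated status output: while paused, {{.APIServer}} reports Paused and {{.Kubelet}} Stopped, with status exiting non-zero. A condensed sketch of the same cycle; the post-unpause output is an expectation, as the log does not show it:

	out/minikube-linux-arm64 pause -p old-k8s-version-461409
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-461409   # Paused, exit 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-461409
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-461409   # expected: Running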

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s7ljz" [dad2d9eb-6e23-4fb1-8456-af785fd944bf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s7ljz" [dad2d9eb-6e23-4fb1-8456-af785fd944bf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.037366045s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.04s)

TestStartStop/group/embed-certs/serial/FirstStart (85.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-613794 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-613794 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m25.7238043s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.72s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-s7ljz" [dad2d9eb-6e23-4fb1-8456-af785fd944bf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014163846s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-943728 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-943728 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/no-preload/serial/Pause (4.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-943728 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-943728 -n no-preload-943728
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-943728 -n no-preload-943728: exit status 2 (439.167629ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-943728 -n no-preload-943728
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-943728 -n no-preload-943728: exit status 2 (448.143917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-943728 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-943728 --alsologtostderr -v=1: (1.180167465s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-943728 -n no-preload-943728
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-943728 -n no-preload-943728
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.38s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.62s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-327782 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-327782 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (57.61816657s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.62s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-327782 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5834b33e-aa10-4a39-830e-a0137c326934] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5834b33e-aa10-4a39-830e-a0137c326934] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.035349647s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-327782 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.51s)
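
DeployApp applies the busybox manifest from the repo's testdata directory, waits for the pod to come up, and then asserts on the open-file limit inside the container. A rough manual equivalent, assuming a minikube source checkout for testdata/busybox.yaml (the kubectl wait line is a stand-in for the test's own poll loop):

    kubectl --context default-k8s-diff-port-327782 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-327782 wait --for=condition=ready \
      pod -l integration-test=busybox --timeout=8m0s
    kubectl --context default-k8s-diff-port-327782 exec busybox -- /bin/sh -c "ulimit -n"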

TestStartStop/group/embed-certs/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-613794 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3596729e-db0a-4939-b8f6-8f2302a29d52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1101 01:35:00.144077 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3596729e-db0a-4939-b8f6-8f2302a29d52] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.030913443s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-613794 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-327782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-327782 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126513493s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-327782 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-327782 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-327782 --alsologtostderr -v=3: (12.127330814s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-613794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-613794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.027456572s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-613794 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)
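
Both EnableAddonWhileActive runs exercise the addon image/registry override plumbing rather than a working metrics-server: the addon is pointed at a stand-in echoserver image on a non-existent registry, and the resulting Deployment is inspected. The same invocation outside the harness, with the flags exactly as logged:

    # Enable the addon with both the image and its registry overridden.
    minikube addons enable metrics-server -p embed-certs-613794 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # Confirm the overrides landed in the Deployment spec.
    kubectl --context embed-certs-613794 describe deploy/metrics-server -n kube-system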

TestStartStop/group/embed-certs/serial/Stop (12.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-613794 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-613794 --alsologtostderr -v=3: (12.097771508s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782: exit status 7 (88.449465ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-327782 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
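
EnableAddonAfterStop checks two behaviours visible above: status exits 7 and prints "Stopped" for a stopped profile, and addons can still be enabled while the cluster is down, taking effect on the next start. A sketch, assuming a plain minikube binary:

    # Exit status 7 signals a stopped host; the test tolerates it.
    minikube status --format='{{.Host}}' -p default-k8s-diff-port-327782 \
      || echo "status exited $? (7 = host stopped)"
    # Enabling an addon on the stopped profile records it for the next start.
    minikube addons enable dashboard -p default-k8s-diff-port-327782 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4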

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (352.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-327782 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-327782 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m51.926936579s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (352.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-613794 -n embed-certs-613794
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-613794 -n embed-certs-613794: exit status 7 (90.380135ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-613794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (613.82s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-613794 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 01:35:28.170711 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:28.175945 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:28.186231 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:28.206560 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:28.246827 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:28.327176 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:28.488152 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:28.809178 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:29.450005 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:30.730867 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:33.291978 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:38.412343 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:35:48.653122 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:36:09.133687 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:36:45.304159 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 01:36:50.094830 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:37:02.259680 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 01:37:17.199065 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:17.204337 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:17.214636 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:17.234893 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:17.276034 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:17.356383 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:17.516789 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:17.837094 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:18.477594 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:19.758566 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:22.319091 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:27.439979 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:37.680685 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:37:55.882205 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
E1101 01:37:58.161100 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:38:12.015488 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:38:39.121569 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:40:00.144865 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
E1101 01:40:01.041803 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
E1101 01:40:28.171280 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:40:55.856311 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-613794 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (10m13.374193297s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-613794 -n embed-certs-613794
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (613.82s)
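
The interleaved E1101 ... cert_rotation.go:168 lines above are background noise rather than failures of this test: they appear to come from client-go's certificate-rotation watcher, which is still tracking client.crt paths for profiles (old-k8s-version-461409, no-preload-943728, functional-258660, and others) that earlier tests deleted. When reading a raw log, they can be filtered out with something like (hypothetical log filename):

    grep -v 'cert_rotation.go:168' Docker_Linux_crio_arm64_17486.log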

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bnvp7" [d80f25e9-d2bc-4fdf-ad55-9bb3df8f82b4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bnvp7" [d80f25e9-d2bc-4fdf-ad55-9bb3df8f82b4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.028772651s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bnvp7" [d80f25e9-d2bc-4fdf-ad55-9bb3df8f82b4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012748825s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-327782 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-327782 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.57s)
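
VerifyKubernetesImages dumps the image list that CRI-O holds inside the node and flags anything outside the expected minikube set; here the kindnet and busybox images are reported but tolerated. The underlying command, verbatim from the log:

    minikube ssh -p default-k8s-diff-port-327782 "sudo crictl images -o json"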

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-327782 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782: exit status 2 (388.286693ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782: exit status 2 (364.754542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-327782 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-327782 -n default-k8s-diff-port-327782
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.60s)

TestStartStop/group/newest-cni/serial/FirstStart (46.57s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-766747 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 01:42:02.259923 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 01:42:17.199287 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-766747 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (46.568024938s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.57s)
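
The newest-cni profile starts with a narrower readiness wait and CNI-specific configuration; the WARNING lines in later subtests note that pods cannot schedule until a network plugin is actually installed. The start invocation, reflowed for readability with the flags exactly as logged:

    minikube start -p newest-cni-766747 --memory=2200 \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.28.3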

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-766747 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-766747 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.110949985s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (1.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-766747 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-766747 --alsologtostderr -v=3: (1.279169315s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-766747 -n newest-cni-766747
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-766747 -n newest-cni-766747: exit status 7 (97.414163ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-766747 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (29.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-766747 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1101 01:42:44.882553 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-766747 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (29.444942335s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-766747 -n newest-cni-766747
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.88s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-766747 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/newest-cni/serial/Pause (3.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-766747 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-766747 -n newest-cni-766747
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-766747 -n newest-cni-766747: exit status 2 (369.370864ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-766747 -n newest-cni-766747
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-766747 -n newest-cni-766747: exit status 2 (375.740148ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-766747 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-766747 -n newest-cni-766747
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-766747 -n newest-cni-766747
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.31s)

TestNetworkPlugins/group/auto/Start (84.32s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.317283886s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.32s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-450738 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-450738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vcbhx" [1c7938c8-bcda-4ea8-b996-5ed18b3b3359] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vcbhx" [1c7938c8-bcda-4ea8-b996-5ed18b3b3359] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.01118031s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.34s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-450738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
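
Every network-plugin group runs the same three probes against its netcat deployment, as just seen for auto: in-cluster DNS resolution, a loopback dial, and a hairpin dial back through the pod's own service. The commands, verbatim from the log:

    # DNS: resolve the kubernetes.default service from inside the pod.
    kubectl --context auto-450738 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: reach the pod's own port 8080 over loopback.
    kubectl --context auto-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: reach the same port through the netcat service name.
    kubectl --context auto-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"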

TestNetworkPlugins/group/kindnet/Start (46.81s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1101 01:44:56.660620 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/default-k8s-diff-port-327782/client.crt: no such file or directory
E1101 01:45:00.144205 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/addons-864560/client.crt: no such file or directory
E1101 01:45:01.781017 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/default-k8s-diff-port-327782/client.crt: no such file or directory
E1101 01:45:12.022082 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/default-k8s-diff-port-327782/client.crt: no such file or directory
E1101 01:45:28.171227 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:45:32.503156 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/default-k8s-diff-port-327782/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (46.805686053s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (46.81s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wl6h4" [76b21666-820f-48b2-af0f-3d4dd2a69bae] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027779107s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wl6h4" [76b21666-820f-48b2-af0f-3d4dd2a69bae] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010757815s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-613794 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fd4vj" [915d997a-02d8-453a-a073-485e5311664f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.034573453s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
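
ControllerPod waits for the CNI's own controller pod to become healthy before any traffic tests run; for kindnet that is the app=kindnet pod in kube-system. A hypothetical one-liner equivalent of the test's poll loop:

    kubectl --context kindnet-450738 -n kube-system wait \
      --for=condition=ready pod -l app=kindnet --timeout=10m0s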

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-613794 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-450738 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestStartStop/group/embed-certs/serial/Pause (4.76s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-613794 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-613794 --alsologtostderr -v=1: (1.2033229s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-613794 -n embed-certs-613794
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-613794 -n embed-certs-613794: exit status 2 (379.905265ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-613794 -n embed-certs-613794
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-613794 -n embed-certs-613794: exit status 2 (375.40155ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-613794 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-613794 --alsologtostderr -v=1: (1.295058626s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-613794 -n embed-certs-613794
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-613794 -n embed-certs-613794
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.76s)
E1101 01:50:19.225880 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/default-k8s-diff-port-327782/client.crt: no such file or directory
E1101 01:50:28.170687 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/old-k8s-version-461409/client.crt: no such file or directory
E1101 01:50:41.959807 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:41.965146 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:41.975459 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:41.995719 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:42.035896 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:42.116375 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:42.276719 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:42.597242 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:43.238270 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:43.437623 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
E1101 01:50:44.518507 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:47.079301 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
E1101 01:50:52.199682 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-450738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jrm7g" [54e0cccd-5140-4396-aebf-7c41a66bbe3b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jrm7g" [54e0cccd-5140-4396-aebf-7c41a66bbe3b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.012348876s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.42s)

TestNetworkPlugins/group/calico/Start (83.44s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m23.4408776s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.44s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-450738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (68.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1101 01:47:02.260925 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/functional-258660/client.crt: no such file or directory
E1101 01:47:17.198637 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/no-preload-943728/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m8.781321236s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.78s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d8h29" [d55d82d1-c3d9-436a-b0e7-ac44789bb899] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.047209371s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-450738 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (9.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-450738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wntst" [363ce4b5-ede0-428c-b829-63e5918c15bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wntst" [363ce4b5-ede0-428c-b829-63e5918c15bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.011831473s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.42s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-450738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-450738 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-450738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6cqdt" [544b78f8-64eb-43b7-a8f2-6cc8b96b2226] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 01:47:38.928742 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/ingress-addon-legacy-992876/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6cqdt" [544b78f8-64eb-43b7-a8f2-6cc8b96b2226] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.018695714s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-450738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.28s)

TestNetworkPlugins/group/enable-default-cni/Start (52.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (52.884347131s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.88s)
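--enable-default-cni asks minikube to write a basic bridge CNI config onto the node instead of deploying a CNI via DaemonSet (the flag is deprecated in newer minikube in favor of --cni=bridge). To see what it produced, the standard CNI config directory can be listed (the directory path is the CNI convention, an assumption about this image):

    minikube -p enable-default-cni-450738 ssh "ls /etc/cni/net.d"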

TestNetworkPlugins/group/flannel/Start (66.38s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.376320651s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.38s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-450738 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-450738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-skqrp" [8e666122-4b90-4e44-8290-c43a836bfe14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-skqrp" [8e666122-4b90-4e44-8290-c43a836bfe14] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.01121127s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-450738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
E1101 01:49:21.506394 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
helpers_test.go:344: "kube-flannel-ds-4hr9m" [2a3fee04-693b-47bc-a481-593bcb26c975] Running
E1101 01:49:21.512405 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
E1101 01:49:21.523403 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
E1101 01:49:21.543833 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
E1101 01:49:21.583993 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
E1101 01:49:21.664292 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
E1101 01:49:21.824580 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
E1101 01:49:22.144683 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
E1101 01:49:22.785366 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.041994493s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)
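ControllerPod waits for the flannel DaemonSet pods (label app=flannel in the kube-flannel namespace, per the wait above) to reach Running. The same state can be checked directly with:

    kubectl --context flannel-450738 -n kube-flannel get pods -l app=flannel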

TestNetworkPlugins/group/bridge/Start (91.52s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-450738 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m31.515854613s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.52s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-450738 "pgrep -a kubelet"
E1101 01:49:26.626717 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

TestNetworkPlugins/group/flannel/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-450738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cvlhp" [30f5fc4a-4b14-42f2-8a64-0ce1d6958730] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 01:49:31.747268 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/auto-450738/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-cvlhp" [30f5fc4a-4b14-42f2-8a64-0ce1d6958730] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.011328501s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-450738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-450738 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-450738 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qdxgg" [42c21a13-c5b9-4ce1-87df-7d9fb7cb6844] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qdxgg" [42c21a13-c5b9-4ce1-87df-7d9fb7cb6844] Running
E1101 01:51:02.440264 1202897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17486-1197516/.minikube/profiles/kindnet-450738/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.009308981s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.30s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-450738 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-450738 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (29/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-263246 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-263246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-263246
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-386632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-386632
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

TestNetworkPlugins/group/kubenet (5.07s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-450738 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-450738

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-450738

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /etc/hosts:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /etc/resolv.conf:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-450738

>>> host: crictl pods:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: crictl containers:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> k8s: describe netcat deployment:
error: context "kubenet-450738" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-450738" does not exist

>>> k8s: netcat logs:
error: context "kubenet-450738" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-450738" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-450738" does not exist

>>> k8s: coredns logs:
error: context "kubenet-450738" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-450738" does not exist

>>> k8s: api server logs:
error: context "kubenet-450738" does not exist

>>> host: /etc/cni:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: ip a s:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: ip r s:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: iptables-save:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: iptables table nat:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-450738" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-450738" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-450738" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: kubelet daemon config:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> k8s: kubelet logs:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-450738

>>> host: docker daemon status:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: docker daemon config:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: docker system info:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: cri-docker daemon status:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: cri-docker daemon config:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: cri-dockerd version:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: containerd daemon status:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: containerd daemon config:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: containerd config dump:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: crio daemon status:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: crio daemon config:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: /etc/crio:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

>>> host: crio config:
* Profile "kubenet-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-450738"

----------------------- debugLogs end: kubenet-450738 [took: 4.862285906s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-450738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-450738
--- SKIP: TestNetworkPlugins/group/kubenet (5.07s)
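For context on the skip: kubenet is kubelet's legacy built-in network plugin and bypasses CNI, while CRI-O resolves pod networking from CNI configs, so the combination is unsupported and the harness bails out before creating the profile, which is why every debug probe above fails with a missing profile or context. To confirm that a crio-based profile really is CNI-backed, one of the profiles this run did start can be inspected (assuming it is still running):

    minikube -p flannel-450738 ssh "ls /etc/cni/net.d"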

TestNetworkPlugins/group/cilium (7.08s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-450738 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-450738

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-450738" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-450738" does not exist

>>> k8s: netcat logs:
error: context "cilium-450738" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-450738" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-450738" does not exist

>>> k8s: coredns logs:
error: context "cilium-450738" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-450738" does not exist

>>> k8s: api server logs:
error: context "cilium-450738" does not exist

>>> host: /etc/cni:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: ip a s:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: ip r s:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: iptables-save:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: iptables table nat:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-450738

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-450738

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-450738" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-450738" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-450738

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-450738

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-450738" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-450738" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-450738" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-450738" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-450738" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: kubelet daemon config:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> k8s: kubelet logs:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

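[Editor's note] The kubeconfig rendered above is empty (clusters, contexts, and users are all null), which is the single root cause of every "context was not found" failure in this dump. Below is a minimal client-go sketch of the lookup kubectl performs before running any --context command; the hardcoded path and context name are illustrative assumptions.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config, the same file whose contents are dumped above.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	// With "contexts: null" the context map is empty, so any --context
	// lookup fails exactly the way the kubectl probes above did.
	if _, ok := cfg.Contexts["cilium-450738"]; !ok {
		fmt.Println(`context "cilium-450738" does not exist`)
	}
}
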
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-450738

>>> host: docker daemon status:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: docker daemon config:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: docker system info:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: cri-docker daemon status:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: cri-docker daemon config:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: cri-dockerd version:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: containerd daemon status:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: containerd daemon config:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: containerd config dump:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: crio daemon status:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: crio daemon config:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: /etc/crio:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

>>> host: crio config:
* Profile "cilium-450738" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-450738"

----------------------- debugLogs end: cilium-450738 [took: 6.836361222s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-450738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-450738
--- SKIP: TestNetworkPlugins/group/cilium (7.08s)
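[Editor's note] The errors above are a consequence of the skip, not a defect: every probe reports the profile as missing, which indicates the test was skipped before "minikube start -p cilium-450738" ever ran. The cleanup step recorded by helpers_test.go can be reproduced with a plain exec call; this sketch uses the binary path taken from the log and is not a general-purpose API.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same cleanup the test helper ran: delete the never-started profile so
	// no stale state leaks into later tests. Binary path taken from the log.
	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", "cilium-450738").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("delete failed:", err)
	}
}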
